According to Adweek, the New York Times updated its Terms of Service on August 3rd to prohibit its content from being used in the development of “any software program, including, but not limited to, training a machine learning or artificial intelligence (AI) system.” That includes text, photographs, images, audio/video clips, “look and feel,” metadata, and compilations.

The Verge reports: The updated terms now also specify that automated tools like website crawlers designed to use, access, or collect such content cannot be used without written permission from the publication. The NYT says that refusing to comply with these new restrictions could result in unspecified fines or penalties. Despite introducing the new rules to its policy, the publication doesn’t appear to have made any changes to its robots.txt, the file that informs search engine crawlers which URLs can be accessed. The move follows a recent update to Google’s privacy policy that discloses the search giant reserves the right to scrape just about everything you post online to build its AI tools.
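For context, a robots.txt change of the kind the Times has not yet made would typically amount to a few lines per crawler. The sketch below is illustrative only and not the Times’ actual file; GPTBot and CCBot are the user-agent tokens used by OpenAI’s and Common Crawl’s crawlers, respectively.

```
# Hypothetical robots.txt entries a publisher could add to turn away
# known AI training crawlers; "Disallow: /" covers the entire site.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt is advisory: it signals which URLs crawlers may fetch, but enforcement against non-compliant bots still falls back on terms of service and technical blocking.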