Is it okay to follow the robots.txt on YouTube and access it from the program?

Asked 2 years ago, Updated 2 years ago, 127 views

I am creating a bot that regularly accesses YouTube using Python's requests library.

YouTube's Terms of Service say something like this:

Terms of Service: https://www.youtube.com/static?template=terms&hl=en&gl=US

You are not allowed to:
(abbreviated)
3. access the Service using any automated means (such as robots, botnets or scrapers) except (a) in the case of public search engines, in accordance with YouTube's robots.txt file; or (b) with YouTube's prior written permission;

If my program accesses only URLs that are not listed under Disallow in YouTube's robots.txt, does that count as complying with these rules?
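Regardless of the legal question, programmatically checking a URL against a robots.txt file is straightforward with Python's standard library. Below is a minimal sketch using `urllib.robotparser`; the robots.txt excerpt here is hypothetical and for illustration only, so a real bot should fetch and parse the live https://www.youtube.com/robots.txt instead.

```python
from urllib import robotparser

# Hypothetical robots.txt excerpt (illustrative only -- not YouTube's real file).
SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /comment
Disallow: /get_video
Allow: /
"""

def is_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given robots.txt text permits user_agent to fetch url."""
    parser = robotparser.RobotFileParser()
    # parse() accepts an iterable of lines, so no network access is needed here.
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

print(is_allowed(SAMPLE_ROBOTS_TXT, "MyBot", "https://www.youtube.com/watch?v=abc"))  # True
print(is_allowed(SAMPLE_ROBOTS_TXT, "MyBot", "https://www.youtube.com/get_video"))    # False
```

To check against the live file, you could instead call `parser.set_url("https://www.youtube.com/robots.txt")` followed by `parser.read()`.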

Background:

  • The YouTube API has strict quota limits, which makes building applications with it very difficult.
  • I am creating an application that communicates with YouTube directly from the client side, without a developer-run server, so implementing it with the API is nearly impossible.

python youtube-data-api youtube

2022-09-30 16:52

1 Answer

As a disclaimer, I am not a legal expert, so you should consult one for the exact details. But reading the terms literally:

access the Service using any automated means ... except (a) in the case of public search engines, in accordance with YouTube's robots.txt file

As this passage shows, crawling without YouTube's prior permission is limited to public search engines. In other words, whether it is allowed depends on the purpose of your bot.

*For readers who find this Q&A later: the terms may change, so be sure to check the latest version of the Terms of Service.

You want to crawl because of the strict API quota limits, but those limits presumably exist for a reason, and intentionally working around them is not considerate to the service operator. Consider applying for a YouTube API quota increase instead.


2022-09-30 16:52



© 2024 OneMinuteCode. All rights reserved.