This would suggest that robots.txt doesn't disallow crawling for the URL of interest, or that YQL is seeing a different robots.txt than I am.
Is there a way to diagnose why YQL is denying this request? Could the problem be the wildcard in the third Disallow entry, since not all crawlers support wildcards?
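For what it's worth, the wildcard theory is easy to test locally with Python's strict urllib.robotparser, which follows the original robots.txt spec and does not treat * as a wildcard (the robots.txt content below is a made-up stand-in for the real file):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt with a wildcard Disallow like the one in question.
robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/*.html
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The plain prefix rule matches as expected.
print(rp.can_fetch("TestBot", "http://example.com/private/page"))  # False

# A spec-strict parser compares "/tmp/foo.html" against the literal prefix
# "/tmp/*.html", so the wildcard rule never matches and the URL is allowed.
print(rp.can_fetch("TestBot", "http://example.com/tmp/foo.html"))  # True
```

A wildcard-aware crawler (e.g. Googlebot or Slurp, which support the * extension) would block /tmp/foo.html here, while a strict parser allows it, so the same robots.txt can produce different verdicts depending on which parser the service uses.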
Hmmmm. A little mysterious since that call appears to work now (and you are right, that robots.txt is fine for us).
In the future, you might want to turn on debug=true in the console and take a closer look at the network traces for each call. YQL makes the robots.txt request using its own user agent and then fetches the page itself using a Mozilla variant; sometimes that first fetch gets blocked.
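That two-agent behavior also explains how a service can see a different robots.txt than your browser does: some servers vary the response by User-Agent. A self-contained sketch of that failure mode, using a throwaway local server and an invented "CrawlerBot" agent string (all names here are illustrative, not YQL's actual agents):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RobotsHandler(BaseHTTPRequestHandler):
    """Serves a different robots.txt depending on the User-Agent header."""

    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        if "CrawlerBot" in agent:            # pretend the crawler agent is blocked
            body = b"User-agent: *\nDisallow: /\n"
        else:                                # browsers get a permissive file
            body = b"User-agent: *\nDisallow:\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):            # silence per-request logging
        pass

def fetch(url, agent):
    req = urllib.request.Request(url, headers={"User-Agent": agent})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

server = HTTPServer(("127.0.0.1", 0), RobotsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/robots.txt" % server.server_address[1]

browser_view = fetch(url, "Mozilla/5.0")
crawler_view = fetch(url, "CrawlerBot/1.0")
server.shutdown()

print(browser_view != crawler_view)   # True: the two agents see different files
```

Comparing what you fetch as a browser against what you fetch with the service's agent string (visible in the debug network traces) is a quick way to rule this out.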