That `Scraper.new` takes EITHER a URL and a selector OR an array of URLs is confusing. We should keep both on `new` for backwards compatibility, but add a helper method for each pattern -- and use those helper methods in the README.
This will hopefully allay some of the confusion in #30 and address the API problems that were mentioned in #5 without such a dramatic refactor.
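A minimal sketch of what the two helper methods might look like, assuming they simply delegate to the existing `new` signatures (the method names `index` and `instances` follow the later comments in this thread; the internals here are hypothetical placeholders, not the library's real implementation):

```ruby
class Scraper
  # Existing constructor: accepts either (url, selector) or an array of URLs.
  def initialize(url_or_urls, selector = nil)
    @url_or_urls = url_or_urls
    @selector = selector
  end

  # Index pattern: one index-page URL plus a selector for the links on it.
  def self.index(url, selector)
    new(url, selector)
  end

  # Instances pattern: an explicit list of instance-page URLs.
  def self.instances(urls)
    new(urls)
  end
end
```

With named entry points, a README example can say `Scraper.index(url, selector)` or `Scraper.instances(urls)` and the intended calling pattern is unambiguous, while old code calling `Scraper.new` directly keeps working.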
`Scraper#index` will return a Scraper instance (with the actual fetching perhaps deferred until later) on which a `#scrape` call will fetch the links on the index page matched by the selector expression. `Scraper#instances` will return a Scraper instance on which a `#scrape` call will fetch the URLs specified in the argument to `#instances`.
I think for 1.0.0 the Scraper returned by `index` will immediately fetch the index page, so that the Scraper can be added to other scrapers, see #35. For now, it'll still only be fetched on `#scrape`.
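The pre-1.0.0 deferred behavior described above could be sketched like this (a hypothetical stub: `fetched` and the body of `#scrape` stand in for the real HTTP request and selector matching):

```ruby
class Scraper
  attr_reader :fetched

  def initialize(url, selector)
    @url, @selector = url, selector
    @fetched = false  # constructing the scraper makes no HTTP request
  end

  def self.index(url, selector)
    new(url, selector)
  end

  def scrape
    @fetched = true  # stand-in for the real fetch + selector match
    []               # would return the scraped links
  end
end

s = Scraper.index("http://example.com/archive", "a.story")
s.fetched  # => false; nothing fetched at construction time
s.scrape
s.fetched  # => true
```

Under the 1.0.0 plan, the fetch in `#scrape` would instead happen inside `self.index`, so the returned Scraper already holds the index page and can be composed with other scrapers.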
Hmm, if it makes requests on the first call (e.g. `Scraper.new`, `Scraper.index`), when are the options set? I guess as a hash on that first call? That'll be a breaking change, so I'll queue it up for 1.0.0.
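Passing options as a hash on the first call might look like the following sketch. The option names (`verbose`, `sleep_time`) and defaults are illustrative assumptions, not the library's actual option set:

```ruby
class Scraper
  DEFAULTS = { verbose: false, sleep_time: 30 }.freeze

  attr_reader :options

  def initialize(url, selector, options = {})
    @url, @selector = url, selector
    @options = DEFAULTS.merge(options)
    # Under the 1.0.0 plan, the index request would fire here,
    # after the options have been merged.
  end

  def self.index(url, selector, options = {})
    new(url, selector, options)
  end
end

s = Scraper.index("http://example.com", "a.story", verbose: true)
s.options[:verbose]     # => true
s.options[:sleep_time]  # => 30 (default preserved)
```

Merging over a defaults hash keeps the breaking change small: callers who pass no options get the current behavior, and any option they do pass is in place before the first request is made.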