About the Job:
Scrapinghub is looking for software engineers to join our Professional Services team to work on web crawler development with Scrapy, our flagship open source project.
Are you interested in building web crawlers harnessing the Scrapinghub platform, which powers crawls of over 3 billion pages a month?
Do you like working in a company with a strong open source foundation?
Scrapinghub helps companies, from Fortune 500 enterprises to up-and-coming early-stage startups, turn web content into useful data with a cloud-based web crawling platform, off-the-shelf datasets, and turnkey web scraping services.
Job Responsibilities:
- Design, develop and maintain Scrapy web crawlers
- Leverage the Scrapinghub platform and our open source projects to perform distributed information extraction, retrieval and data processing
- Identify and resolve performance and scalability issues with distributed crawling at scale
- Help identify, debug and fix problems with open source projects, including Scrapy
Scrapinghub’s platform and Professional Services offerings have grown tremendously over the past few years, and many big projects are waiting in the pipeline; in this role you would be a key part of delivering them. Here’s what we’re looking for:
About you:
- 2+ years of software development experience.
- Solid Python knowledge.
- Familiarity with Linux/UNIX, HTTP, HTML, JavaScript and networking.
- Good written communication skills in English.
- Availability to work full time.
Bonus points for:
- Scrapy experience.
- Familiarity with techniques and tools for crawling, extracting and processing data (e.g. NLTK, pandas, scikit-learn, MapReduce, NoSQL).
- Good spoken English.
via developer jobs - Stack Overflow