Friday, 3 July 2015

Mobile app developers “duped” into distributing data-scraping malware: NICTA

The surge in mobile malware has led many to condemn developers' poor security practices, yet recent NICTA research suggests that – even though data-stealing is ubiquitous among both paid and free Android applications – many mobile application developers are in fact being “duped” into incorporating data-stealing routines into their applications.

A methodical analysis of Android applications and source code found that all of the top 100 paid and non-paid apps in Australia were collecting personal information, with 60 percent of the apps incorporating some sort of tracking library and 20 percent of the apps featuring more than three different tracking libraries.

While many have blamed developers for their poor security, Aruna Seneviratne, who leads NICTA's mobile systems research group, told CSO Australia that many tracking libraries were inadvertently added when developers incorporated third-party libraries into their mobile apps.

“In most cases app developers just use third-party libraries and don't know what's in them,” he said. “They're not being malicious for the sake of being malicious; they are just being duped into doing a thing that collects a lot of information.”

And collect they do. Apps analysed by the team – whose paper 'Early Detection of Spam Mobile Apps' was accepted for presentation at the recent WWW 2015 conference in Florence, Italy – were siphoning all kinds of personal information from users' mobile devices, often sending it to enlarge what have become massive databases of personal preferences and behavioural modelling.

“It's amazing how much information each of those apps collects,” he said, “and the scary thing is that most of them actually go to a small number of sources – which means these guys can actually infer a lot of information about you. They have a very good idea of who you are and what you're doing – and they are cross-matching the information they collect.”

Ever more clever data-siphoning routines were making data collection richer all the time, with many Android apps now designed with libraries that collect information about nearby Wi-Fi access points and can correctly extrapolate the user's location 90 percent of the time.

Seneviratne blamed Google's relatively lax app-approval process for the proliferation of such apps, which join the malware-laden apps that by the team's figures account for around 3 percent of all Google Play Store apps.

Recognising that developers are often as clueless as users about the extent of the data collection going on, the team has proposed an app-rating system that will give consumers a better idea of what they're enabling by downloading and installing a particular app.

A basic prototype has already been developed and a pilot site is expected to be up and running by the fourth quarter of this year. The service, which rates apps on criteria such as privacy and security, will be available to third parties as a Web service that Seneviratne hopes will eventually help it gain traction on app-rating and other sites.

“We've been working to come up with a scheme that is similar to the energy-ratings system that you have for electrical appliances,” he said, noting that the site will also seek to boost developers' security awareness by correlating app ratings “to let consumers know they can download an alternate app that has the same functionality but a higher security rating”.

Israeli developer-tools firm Checkmarx has taken its own approach to improving developers' security skills, recently learning extensive lessons as hackers worked to manipulate its Game of Hacks security application – which it is now developing for sale to large corporates for developer training and testing.

Source: http://www.cso.com.au/article/576533/mobile-app-developers-duped-into-distributing-data-scraping-malware-nicta/

Thursday, 25 June 2015

Data Scraping - Enjoy the Appeal of the Hand Scraped Flooring

Hand scraped flooring is appreciated for the character it brings into the home. This style of flooring relies on hand-scraped planks of wood rather than precisely milled boards. The irregularities in the planks provide a certain degree of charm and help to create a more unique feature in the home.

Distressed vs. Hand scraped

There are two types of flooring on the market that have an aged, unique charm with an imperfect finish. However, there is a significant difference in the process used to manufacture the planks. The more standard distressed flooring is cut on a factory production line. The grooves, scratches, dents, and other irregularities in these planks are part of the manufacturing process, achieved by rolling or pressing the wood onto a patterned surface.

Real hand-scraped planks are made by craftsmen who work on each plank individually. This working technique guarantees that each plank will be unique in appearance.

Scraping the planks

The hand scraping process on the highest-quality planks is completed by trained carpenters or craftsmen who take great care in their workmanship and produce a high-quality end product. It can be worth asking the flooring supplier who completes the work.

Besides the well-scraped lumber, there are also planks bought from less desirable sources, a consequence of the increased demand for this type of flooring. At the lower end of the market unskilled workers are used, and the end results aren't so impressive.

A high-quality plank has a distinctive look and feels and functions perfectly well as solid flooring, while low-quality work can appear quite ugly and cheap.

Even though it might cost a little more, it pays to seek out hardwood floor dealers that rely on skilled workers to complete the scraping process.

Buying the right lumber

Once a genuine supplier is found, it is necessary to determine the finer aspects of the wooden flooring. This hand scraped flooring is available in several hardwoods, such as oak, cherry, hickory, and walnut. Plus, it comes in many different sizes and widths. A further aspect relates to the finish with darker colored woods more effective at highlighting the character of the scraped boards. This makes the shadows and lines appear more prominent once the planks have been installed at home.

Why not visit Bellacerafloors.com for the latest collection of luxury floor materials, including the Handscraped Hardwood Flooring.

Source: http://ezinearticles.com/?Enjoy-the-Appeal-of-the-Hand-Scraped-Flooring&id=8995784

Saturday, 20 June 2015

Migrating Table-oriented Web Scraping Code to rvest w/XPath & CSS Selector Examples

My intrepid colleague (@jayjacobs) informed me of this (and didn’t gloat too much). I’ve got a “pirate day” post coming up this week that involves scraping content from the web, and thought folks might benefit from another example that compares the “old way” and the “new way” (Hadley excels at making lots of “new ways” in R :-). I’ve left the output in with the code to show that you get the same results.

The following shows old/new methods for extracting a table from a web site, including how to use either XPath selectors or CSS selectors in rvest calls. To stave off some potential comments: due to the way this table is set up and the need to extract only certain components from the td blocks and elements from tags within the td blocks, a simple readHTMLTable would not suffice.

The old/new approaches are very similar, but I especially like the ability to chain output à la magrittr/dplyr and not having to mentally switch gears to XPath if I’m doing other work targeting the browser (i.e. prepping data for D3).

The code (sans output) is in this gist, and IMO the rvest package is going to make working with web site data so much easier.

library(XML)
library(httr)
library(rvest)
library(magrittr)

# setup connection & grab HTML the "old" way w/httr
freak_get <- GET("http://torrentfreak.com/top-10-most-pirated-movies-of-the-week-130304/")
freak_html <- htmlParse(content(freak_get, as="text"))

# do the same the rvest way, using "html_session" since we may need connection info in some scripts
freak <- html_session("http://torrentfreak.com/top-10-most-pirated-movies-of-the-week-130304/")

# extracting the "old" way with xpathSApply
xpathSApply(freak_html, "//*/td[3]", xmlValue)[1:10]
##  [1] "Silver Linings Playbook "           "The Hobbit: An Unexpected Journey " "Life of Pi (DVDscr/DVDrip)"
##  [4] "Argo (DVDscr)"                      "Identity Thief "                    "Red Dawn "
##  [7] "Rise Of The Guardians (DVDscr)"     "Django Unchained (DVDscr)"          "Lincoln (DVDscr)"
## [10] "Zero Dark Thirty "

xpathSApply(freak_html, "//*/td[1]", xmlValue)[2:11]
##  [1] "1"  "2"  "3"  "4"  "5"  "6"  "7"  "8"  "9"  "10"

xpathSApply(freak_html, "//*/td[4]", xmlValue)
##  [1] "7.4 / trailer" "8.2 / trailer" "8.3 / trailer" "8.2 / trailer" "8.2 / trailer" "5.3 / trailer" "7.5 / trailer"
##  [8] "8.8 / trailer" "8.2 / trailer" "7.6 / trailer"

xpathSApply(freak_html, "//*/td[4]/a[contains(@href,'imdb')]", xmlAttrs, "href")
##                                    href                                    href                                    href
##  "http://www.imdb.com/title/tt1045658/"  "http://www.imdb.com/title/tt0903624/"  "http://www.imdb.com/title/tt0454876/"
##                                    href                                    href                                    href
##  "http://www.imdb.com/title/tt1024648/"  "http://www.imdb.com/title/tt2024432/"  "http://www.imdb.com/title/tt1234719/"
##                                    href                                    href                                    href
##  "http://www.imdb.com/title/tt1446192/"  "http://www.imdb.com/title/tt1853728/"  "http://www.imdb.com/title/tt0443272/"
##                                    href
## "http://www.imdb.com/title/tt1790885/?"

# extracting with rvest + XPath
freak %>% html_nodes(xpath="//*/td[3]") %>% html_text() %>% .[1:10]
##  [1] "Silver Linings Playbook "           "The Hobbit: An Unexpected Journey " "Life of Pi (DVDscr/DVDrip)"
##  [4] "Argo (DVDscr)"                      "Identity Thief "                    "Red Dawn "
##  [7] "Rise Of The Guardians (DVDscr)"     "Django Unchained (DVDscr)"          "Lincoln (DVDscr)"
## [10] "Zero Dark Thirty "

freak %>% html_nodes(xpath="//*/td[1]") %>% html_text() %>% .[2:11]
##  [1] "1"  "2"  "3"  "4"  "5"  "6"  "7"  "8"  "9"  "10"

freak %>% html_nodes(xpath="//*/td[4]") %>% html_text() %>% .[1:10]
##  [1] "7.4 / trailer" "8.2 / trailer" "8.3 / trailer" "8.2 / trailer" "8.2 / trailer" "5.3 / trailer" "7.5 / trailer"
##  [8] "8.8 / trailer" "8.2 / trailer" "7.6 / trailer"

freak %>% html_nodes(xpath="//*/td[4]/a[contains(@href,'imdb')]") %>% html_attr("href") %>% .[1:10]
##  [1] "http://www.imdb.com/title/tt1045658/"  "http://www.imdb.com/title/tt0903624/"
##  [3] "http://www.imdb.com/title/tt0454876/"  "http://www.imdb.com/title/tt1024648/"
##  [5] "http://www.imdb.com/title/tt2024432/"  "http://www.imdb.com/title/tt1234719/"
##  [7] "http://www.imdb.com/title/tt1446192/"  "http://www.imdb.com/title/tt1853728/"
##  [9] "http://www.imdb.com/title/tt0443272/"  "http://www.imdb.com/title/tt1790885/?"

# extracting with rvest + CSS selectors
freak %>% html_nodes("td:nth-child(3)") %>% html_text() %>% .[1:10]
##  [1] "Silver Linings Playbook "           "The Hobbit: An Unexpected Journey " "Life of Pi (DVDscr/DVDrip)"
##  [4] "Argo (DVDscr)"                      "Identity Thief "                    "Red Dawn "
##  [7] "Rise Of The Guardians (DVDscr)"     "Django Unchained (DVDscr)"          "Lincoln (DVDscr)"
## [10] "Zero Dark Thirty "

freak %>% html_nodes("td:nth-child(1)") %>% html_text() %>% .[2:11]
##  [1] "1"  "2"  "3"  "4"  "5"  "6"  "7"  "8"  "9"  "10"

freak %>% html_nodes("td:nth-child(4)") %>% html_text() %>% .[1:10]
##  [1] "7.4 / trailer" "8.2 / trailer" "8.3 / trailer" "8.2 / trailer" "8.2 / trailer" "5.3 / trailer" "7.5 / trailer"
##  [8] "8.8 / trailer" "8.2 / trailer" "7.6 / trailer"

freak %>% html_nodes("td:nth-child(4) a[href*='imdb']") %>% html_attr("href") %>% .[1:10]
##  [1] "http://www.imdb.com/title/tt1045658/"  "http://www.imdb.com/title/tt0903624/"
##  [3] "http://www.imdb.com/title/tt0454876/"  "http://www.imdb.com/title/tt1024648/"
##  [5] "http://www.imdb.com/title/tt2024432/"  "http://www.imdb.com/title/tt1234719/"
##  [7] "http://www.imdb.com/title/tt1446192/"  "http://www.imdb.com/title/tt1853728/"
##  [9] "http://www.imdb.com/title/tt0443272/"  "http://www.imdb.com/title/tt1790885/?"

# building a data frame (which is kinda obvious, but hey)
data.frame(movie=freak %>% html_nodes("td:nth-child(3)") %>% html_text() %>% .[1:10],
           rank=freak %>% html_nodes("td:nth-child(1)") %>% html_text() %>% .[2:11],
           rating=freak %>% html_nodes("td:nth-child(4)") %>% html_text() %>% .[1:10],
           imdb.url=freak %>% html_nodes("td:nth-child(4) a[href*='imdb']") %>% html_attr("href") %>% .[1:10],
           stringsAsFactors=FALSE)
##                                 movie rank        rating                              imdb.url
## 1            Silver Linings Playbook     1 7.4 / trailer  http://www.imdb.com/title/tt1045658/
## 2  The Hobbit: An Unexpected Journey     2 8.2 / trailer  http://www.imdb.com/title/tt0903624/
## 3          Life of Pi (DVDscr/DVDrip)    3 8.3 / trailer  http://www.imdb.com/title/tt0454876/
## 4                       Argo (DVDscr)    4 8.2 / trailer  http://www.imdb.com/title/tt1024648/
## 5                     Identity Thief     5 8.2 / trailer  http://www.imdb.com/title/tt2024432/
## 6                           Red Dawn     6 5.3 / trailer  http://www.imdb.com/title/tt1234719/
## 7      Rise Of The Guardians (DVDscr)    7 7.5 / trailer  http://www.imdb.com/title/tt1446192/
## 8           Django Unchained (DVDscr)    8 8.8 / trailer  http://www.imdb.com/title/tt1853728/
## 9                    Lincoln (DVDscr)    9 8.2 / trailer  http://www.imdb.com/title/tt0443272/
## 10                  Zero Dark Thirty    10 7.6 / trailer http://www.imdb.com/title/tt1790885/?

Source: http://www.r-bloggers.com/migrating-table-oriented-web-scraping-code-to-rvest-wxpath-css-selector-examples/

Tuesday, 9 June 2015

Web Scraping Services : Data Discovery vs. Data Extraction

Looking at screen-scraping at a simplified level, there are two primary stages involved: data discovery and data extraction. Data discovery deals with navigating a web site to arrive at the pages containing the data you want, and data extraction deals with actually pulling that data off of those pages. Generally when people think of screen-scraping they focus on the data extraction portion of the process, but my experience has been that data discovery is often the more difficult of the two.

The data discovery step in screen-scraping might be as simple as requesting a single URL. For example, you might just need to go to the home page of a site and extract out the latest news headlines. On the other side of the spectrum, data discovery may involve logging in to a web site, traversing a series of pages in order to get needed cookies, submitting a POST request on a search form, traversing through search results pages, and finally following all of the "details" links within the search results pages to get to the data you're actually after. In cases of the former a simple Perl script would often work just fine. For anything much more complex than that, though, a commercial screen-scraping tool can be an incredible time-saver. Especially for sites that require logging in, writing code to handle screen-scraping can be a nightmare when it comes to dealing with cookies and such.
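As a rough sketch of that more involved kind of discovery in Python, the snippet below uses the requests library with a session object so that cookies acquired during login persist into the search step (the site, paths, and form fields are all hypothetical):

import requests

BASE = "http://example.com"  # hypothetical site

session = requests.Session()  # a session carries cookies between requests

# Log in first so the session acquires the cookies it needs.
session.post(BASE + "/login", data={"user": "me", "password": "secret"})

# Submit the search form as a POST request, just as a browser would.
results = session.post(BASE + "/search", data={"q": "widgets", "page": "1"})
print(results.status_code, len(results.text))

Following the "details" links inside each results page would then hand things over to the extraction phase.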

In the data extraction phase you've already arrived at the page containing the data you're interested in, and you now need to pull it out of the HTML. Traditionally this has involved creating a series of regular expressions that match the pieces of the page you want (e.g., URLs and link titles). Regular expressions can be a bit complex to deal with, so most screen-scraping applications will hide these details from you, even though they may use regular expressions behind the scenes.
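To make the regular-expression approach concrete, here is a deliberately naive Python sketch that matches URLs and link titles on a fetched page (the URL is a placeholder, and real-world HTML usually demands a proper parser or much more careful patterns):

import re
import requests

html = requests.get("http://example.com").text  # placeholder URL

# Capture the href value and the visible text of each simple anchor tag.
links = re.findall(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', html)

for url, title in links:
    print(url, title.strip())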

As an addendum, I should probably mention a third phase that is often ignored, and that is, what do you do with the data once you've extracted it? Common examples include writing the data to a CSV or XML file, or saving it to a database. In the case of a live web site you might even scrape the information and display it in the user's web browser in real-time. When shopping around for a screen-scraping tool you should make sure that it gives you the flexibility you need to work with the data once it's been extracted.
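For that third phase, a minimal sketch of the CSV case using Python's standard csv module (the records here are dummy data standing in for whatever was extracted):

import csv

rows = [
    {"url": "http://example.com/a", "title": "Headline A"},
    {"url": "http://example.com/b", "title": "Headline B"},
]

with open("extracted.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "title"])
    writer.writeheader()
    writer.writerows(rows)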

Source: http://ezinearticles.com/?Data-Discovery-vs.-Data-Extraction&id=165396

Tuesday, 2 June 2015

WordPress Titles: scraping with search url

I’ve blogged for a few years now, and I’ve used several tools along the way. zachbeauvais.com began as a Drupal site, until I worked out that it’s a bit overkill, and switched to WordPress. Recently, I’ve been toying with the idea of using a static site generator (à la Jekyll or Hyde), or even pulling together a kind of ebook of ramblings. I also want to be able to arrange the posts based on the keywords they contain, regardless of how they’re categorised or tagged.

Whatever I wanted to do, I ended up with a single point of messiness: individual blog posts, and how they’re formatted. When I started, I seem to remember using Drupal’s truly awful WYSIWYG editor, and tweaking the HTML soup it produced. Then, when I moved over to WordPress, it pulled all the posts and metadata through via RSS, and I tweaked with the visual and text tools which are baked into the engine.

A couple years ago, I started to write in Markdown, and completely apart from the blog (thanks to full-screen writing and loud music). This gives me a local .md file, and I copy/paste into WordPress using a plugin to get rid of the visual editor entirely.

So, I wrote a scraper to return a list of blog posts containing a specific term. What I hope is that this very simple scraper is useful to others—WordPress is pretty common, after all—and that I can get some ideas for improving it and handling post content. If you haven’t used ScraperWiki before, you might not know that you can see the raw scraper by clicking “view source” from the scraper’s overview page (or going here if you’re lazy).

This scraper is based on WordPress’ built-in search, which can be used by passing the search terms to a url, then scraping the resulting page:

http://zachbeauvais.com/?s=search_term&submit=Search

The scraper uses three Python libraries:

    Requests
    ScraperWiki
    lxml.html

There are two variables which can be changed to search for other terms, or using a different WordPress site:

# the three libraries listed above
import requests
import scraperwiki
import lxml.html

term = "coffee"
site = "http://www.zachbeauvais.com"

The rest of the script is really simple: it creates a dictionary called “payload” containing the letter “s”, the keyword, and the instruction to search. The “s” is in there to make up the search url: /?s=coffee …

Requests then GETs the site, passing payload as URL parameters, and I use Requests’ .text attribute to get the HTML of the results page, which I then pass through lxml into the new variable “root”.

payload = {'s': str(term), 'submit': 'Search'}

r = requests.get(site, params=payload)  # This'll be the results page
html = r.text
root = lxml.html.fromstring(html)  # parsing the HTML into the var root

Now, my WordPress theme renders the titles of the retrieved posts in <h1> tags with the CSS class “entry-title”, so I loop through the html text, pulling out the links and text from all the resulting h1.entry-title items. This part of the script would need tweaking, depending on the CSS class and h-tag your theme uses.

for i in root.cssselect("h1.entry-title"):
    link = i.cssselect("a")  # the anchor tag inside each matching title
    text = i.text_content()
    if link:
        data = {
            'uri': link[0].attrib['href'],
            'post-title': str(text),
            'search-term': str(term)
        }
        print data
        scraperwiki.sqlite.save(unique_keys=['uri'], data=data)
    else:
        print "No results."

These are saved into an SQLite database via the ScraperWiki library, and I end up with a database holding the title and link of every blog post containing the keyword.

So, this could, in theory, run on any WordPress instance which uses the same search pattern URL—just change the site variable to match.

Also, you can run this again and again, changing the term to any new keyword. These will be stored in the DB with the keyword in its own column to identify what you were looking for.

See? Pretty simple scraping.

So, what I’d like next is to have a local copy of every post in a single format.

Has anyone got any ideas how I could improve this? And, has anyone used WordPress’ JSON API? It might be a logical next step to call the API to get the posts directly from the MySQL DB… but that would be a new blog post!
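For anyone curious, here is an untested sketch of what that next step might look like, assuming the site exposes the core WordPress REST API (bundled with newer WordPress releases; older installs need a plugin):

import requests

site = "http://www.zachbeauvais.com"
term = "coffee"

# Query the posts endpoint directly instead of scraping the search page.
resp = requests.get(site + "/wp-json/wp/v2/posts", params={"search": term})

for post in resp.json():
    print(post["link"], post["title"]["rendered"])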

Source: https://scraperwiki.wordpress.com/2013/03/11/wordpress-titles-scraping-with-search-url/

Friday, 29 May 2015

Data Scraping Services - Web Scraping Video Tutorial Collection for All Programming Language

Web scraping is a mechanism in which a request is made to a website URL to retrieve the HTML document text, and that text is then parsed to extract data from the HTML code. Scraping a website for data is a generalized approach and can be implemented in almost any programming language, such as PHP, Java, C#, Python and many others.
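As a bare-bones illustration of that request-then-parse mechanism in Python (the URL is a placeholder), using only the requests library and the standard-library HTML parser:

import requests
from html.parser import HTMLParser

# Step 1: request the website URL to get the HTML document text.
html = requests.get("http://example.com").text

# Step 2: parse the text to extract data - here, every link target.
class LinkParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    print(value)

LinkParser().feed(html)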

There is a lot of web scraping software available on the market with which you can extract data without any coding knowledge. In many cases, though, such tools don't help because the data scraping requires a custom crawling flow, and in that case you have to build your own web scraping application in a programming language you know. In this post I have collected scraping video tutorials for many programming languages.

I am mostly familiar with web scraping using PHP, C# and some other scraping tools, and I provide a web scraping service. If you have any scraping requirement, send me your requirements and I will get back to you with sample scraped data and a best price.

Web Scraping Using PHP

You can do web scraping in PHP using the cURL library and the Simple HTML DOM parsing library. The PHP function file_get_contents() can also be useful for making web requests. One drawback of scraping with PHP is that it can't execute JavaScript, so AJAX-based content can't be scraped with PHP alone.

Web Scraping Using C#

There are many libraries available in .NET for HTML parsing and data scraping. I have used the WebBrowser control and the HTML Agility Pack for data extraction in .NET using C#.

I haven't done web scraping in Java, Perl or Python myself, though I have learned web scraping in Node.js using the CasperJS and PhantomJS libraries. Still, I thought the tutorials below would be helpful for those working with Java and Python.

Web Scraping Using Jsoup in Java

Scraping Stock Data Using Python

Develop Web Crawler Using PERL

Web Scraping Using Node.Js

If you find any other good web scraping video tutorials, please share the links in the comments so other readers can benefit from them.

Source: http://webdata-scraping.com/web-scraping-video-tutorial-collection-programming-language/

Tuesday, 26 May 2015

Web Scraping Services - Extracting Business Data You Need

Would you like to have someone collect, extract, find or scrape contact details, stats, lists, or other data and information from websites, online stores, directories, and more?

"Hi-Tech BPO Services offers 100% risk-free, quick, accurate and affordable web scraping, data scraping, screen scraping, data collection, data extraction, and website scraping services to worldwide organizations ranging from medium-sized business firms to Fortune 500 companies."

At Hi-Tech BPO Services we help global businesses build their own databases and mailing lists, generate leads, and get access to vast resources of unstructured data available on the World Wide Web.

We scrape data from various sources such as websites, blogs, podcasts, and online directories, and convert it into structured formats such as Excel, CSV, Access, text, or MySQL using automated and manual scraping technologies. Through our web data scraping services, we crawl websites and gather sales leads, competitors' product details, new offers, pricing methodologies, and various other types of information from the web.

Our web scraping services scrape data such as name, email, phone number, address, country, state, city, product, and pricing details among others.

Areas of Expertise in Web Scraping:

•    Contact Details
•    Statistics data from websites
•    Classifieds
•    Real estate portals
•    Social networking sites
•    Government portals
•    Entertainment sites
•    Auction portals
•    Business directories
•    Job portals
•    Email ids and Profiles
•    URLs in an excel spreadsheet
•    Market place portals
•    Search engine and SEO
•    Accessories portals
•    News portals
•    Online shopping portals
•    Hotels and restaurants
•    Event portals
•    Lead generation

Industries we Serve:

Our web scraping services are suitable for industries including real estate, information technology, university, hospital, medicine, property, restaurant, hotels, banking, finance, insurance, media/entertainment, automobiles, marketing, human resources, manufacturing, healthcare, academics, travel, telecommunication and many more.

Why Hi-Tech BPO Services for Web Scraping?

•    Skilled and committed scraping experts
•    Accurate solutions
•    Highly cost-effective pricing strategies
•    Presence of satisfied clients worldwide
•    Use of the latest and most effective web scraping technologies
•    Ensures timely delivery
•    Round the clock customer support and technical assistance

Get Quick Cost and Time Estimate

Source: http://www.hitechbposervices.com/web-scraping.php

Monday, 25 May 2015

Which language is the most flexible for scraping websites?

I'm new to programming. I know a little Python and a little Objective-C, and I've been going through tutorials for each. Then it occurred to me: I need to know which language is more flexible (Python, Objective-C, something else) for screen scraping a website for content.

What do I mean by "flexible"?

Well, ideally, I need something that will be easy to refactor and tweak for similar projects. I'm trying to avoid doing a lot of re-writing (well, re-coding) if I wanted to switch some of the variables in the program (i.e., the website to be scraped, the content to fetch, etc).

Anyways, if you could please give me your opinion, that would be great. Oh, and if you know any existing frameworks for the language you recommend, please share. (I know a little about Selenium and BeautifulSoup for python already).

4 Answers

I recently wrote a relatively complex web scraper to harvest a TON of data. It had to do some relatively complex parsing, I needed it to stuff it into a database, etc. I'm C# programmer now and formerly a Perl guy.

I wrote my original scraper using Python. I started on a Thursday and by Sunday morning I was harvesting about a million scores from a show horse site. I used Python and SQLite because they were fast.

HOWEVER, as I started putting together programs to regularly keep the data updated and to populate the SQL Server that would backend my MVC3 application, I kept hitting snags and gaps in my Python knowledge.

In the end, I completely rewrote the scraper/parser in C# using the HtmlAgilityPack and it works better than before (and just about as fast).

Because I KNEW THE LANGUAGE and the environment so much better I was able to add better database support, better logging, better error handling, etc. etc.

So... short answer: Python was the fastest to market with a "good enough for now" solution, but the language I know best (C#) was the best long-term solution.

EDIT: I used BeautifulSoup for my original crawler written in Python.

The most flexible is the one that you're most familiar with.

Personally, I use Python for almost all of my utilities. For scraping, I find that its functionality specific to parsing and string manipulation requires little code, is fast and there are a ton of examples out there (strong community). Chances are that someone's already written whatever you're trying to do already, or there's at least something along the same lines that needs very little refactoring.
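To illustrate how little code that can be, here is a minimal sketch using the requests and BeautifulSoup libraries (the URL and selectors are purely illustrative):

import requests
from bs4 import BeautifulSoup

page = requests.get("http://example.com/news").text
soup = BeautifulSoup(page, "html.parser")

# A few lines cover fetch, parse and extract: every headline link.
for a in soup.select("h2 a"):
    print(a.get("href"), a.get_text(strip=True))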

I think it's safe to say that Python is a better place to start than Objective-C. Honestly, just about any language meets the "flexible" requirement. All you need are well-thought-out configuration parameters. Also, a dynamic language like Python can go a long way in increasing flexibility, provided that you account for runtime type errors.

I recently wrote a very simple web-scraper; I chose Common Lisp as I'm learning the language.

On the basis of my experience - both of the language and the availability of help from experienced Lispers - I recommend investigating Common Lisp for your purpose.

There are excellent XML-parsing libraries available for CL, as well as libraries for parsing invalid HTML, which you'll need unless the sites you're parsing consist solely of valid XHTML.

Also, Common Lisp is a good language in which to implement DSLs; a DSL for web-scraping may be a solution to your requirement for flexibility & re-use.

Source: http://programmers.stackexchange.com/questions/74998/which-language-is-the-most-flexible-for-scraping-websites/75006#75006


Friday, 22 May 2015

Scraping Data: Site-specific Extractors vs. Generic Extractors

Scraping is becoming a rather mundane job, with every other organization getting its feet wet with it for its own data gathering needs. Enough crawlers have been built – some open-sourced, others internal to organizations for in-house utilities. Although crawling might seem like a simple technique at the outset, doing it at large scale is the real deal. You need a distributed stack set up to handle huge volumes of data, to provide data in a low-latency model and to deal with fail-overs. This is still achievable after crossing the initial tech barrier and via continuous optimizations. (P.S. Not under-estimating this part, because it still needs a team of engineers monitoring the stats and scratching their heads at times.)

Social Media Scraping

Focused crawls on a predefined list of sites

However, you bump into a completely new land if your goal is to generate clean and usable data sets from these crawls i.e. “extract” data in a format that your DB can process and aid in generating insights. There are 2 ways of tackling this:

a. site-specific extractors which give desired results

b. generic extractors that result in few surprises

Assuming you still do focused crawls on a predefined list of sites, let’s go over specific scenarios when you have to pick between the two-

1. Mass-scale crawls; high-level meta data – Use generic extractors when you have a large-scale crawling requirement on a continuous basis. Large-scale would mean having to crawl sites in the range of hundreds of thousands. Since the web is a jungle and no two sites share the same template, it would be impossible to write an extractor for each. However, you have to settle for just the document-level information from such crawls, like the URL, meta keywords, blog or news title, author, date and article content – which is still enough information to be happy with if your requirement is analyzing the sentiment of the data.
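A generic extractor along those lines might pull only the fields that nearly every page exposes; a rough Python sketch using requests and lxml (the URL is a placeholder, and the content heuristic is deliberately crude):

import requests
from lxml import html

def generic_extract(url):
    # Pull the document-level fields that most pages share.
    tree = html.fromstring(requests.get(url).content)
    return {
        "url": url,
        "title": tree.findtext(".//title"),
        "keywords": tree.xpath("string(//meta[@name='keywords']/@content)"),
        # Crude body heuristic: join the text of every paragraph.
        "content": " ".join(p.text_content() for p in tree.xpath("//p")),
    }

print(generic_extract("http://example.com/some-article"))

The crudeness of that "content" heuristic is exactly the weakness described next.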

A generic extractor case

Generic extractors don't yield accurate results and often mess up datasets, rendering them unusable, because programmatically distinguishing relevant data from irrelevant content is a challenge. For example, how would the extractor know to skip pages that contain only a list of blog links and extract only those with a complete article? Delineating article content from the title on a blog page is not easy either.

To summarize, below is what to expect of a generic extractor.

Pros-

•    minimal manual intervention
•    low on effort and time
•    can work on any scale

Cons-

•    Data quality compromised
•    inaccurate and incomplete datasets
•    lesser details suited only for high-level analyses
•    Suited for gathering- blogs, forums, news
•    Uses- Sentiment Analysis, Brand Monitoring, Competitor Analysis, Social Media Monitoring.

2. Low/Mid scale crawls; detailed datasets – If precise extraction is the mandate, there's no getting away from site-specific extractors. But realistically this is doable only if your scope of work is limited, i.e. a few hundred sites or less. Using site-specific extractors, you can extract any number of fields from any nook or corner of the web pages. Most of the time, most pages on a website share similar templates; if not, they can still be accommodated using site-specific extractors.

Designing extractor for each website

Pros-

•    High data quality
•    Better data coverage on the site

Cons-

•    High on effort and time
•    Site structures keep changing from time to time, and maintaining these extractors requires a lot of monitoring and manual intervention
•    Only for limited scale
•    Suited for gathering – any data from any domain on any site, be it product specifications and price details, reviews, blogs, forums, directories, ticket inventories, etc.
•    Uses – Data Analytics for E-commerce, Business Intelligence, Market Research, Sentiment Analysis

Conclusion

Quite obviously you need both kinds of extractors handy to take care of various use cases. The only way generic extractors can work for detailed datasets is if everyone employs standard data formats on the web (read our post on standard data formats here). However, given the breadth of internet penetration and the variety of things folks like to do on the web, that is overly futuristic.

So while site-specific extractors are going to be around for quite some time, the challenge now is to tweak the generic ones to work better. At PromptCloud, we have added ML components to make them smarter and they have been working well for us so far.

What have your challenges been? Do drop in your comments.

Source: https://www.promptcloud.com/blog/scraping-data-site-specific-extractors-vs-generic-extractors/

Wednesday, 20 May 2015

The Features of the "Holographic Meridian Scraping Therapy"

1. Systematic nature: Brief introduction to the knowledge of viscera, meridians and points in traditional Chinese medicine and the theory of holographic diagnosis and treatment; preliminary discussion of the treatment and health care mechanism of scraping therapy; systematic introduction to the concrete methods of the holographic meridian scraping therapy; enumeration of a host of therapeutic scraping methods for disorders in both Chinese and Western medicine, to embody a combination of disease differentiation and syndrome differentiation; and a summary of the health care scraping methods. It is a practical handbook of gua sha.

2. Scientific: Applying the theories of Chinese and Western medicine to explain the health care and treatment mechanism and clinical applications of scraping therapy; introducing in detail the practical manipulations, items for attention, and indications and contraindications of the scraping therapy. Representative diseases in different clinical departments, for which scraping therapy has a better curative effect, are introduced here along with the therapeutic methods of scraping for these diseases. Stress is placed on disease differentiation in Western medicine and syndrome differentiation in Chinese medicine, which should be combined in practical application.

Although there are more than 140,000 kinds of disease known to modern medicine, all diseases are related to dysfunction of the 14 meridians and internal organs, according to traditional Chinese medicine. The object of scraping therapy is to correct the disharmony in the meridians and internal organs to recover the normal bodily functions. Thus, the scraping of a set of meridian points can be used to treat many diseases. In the section on clinical application only about 100 kinds of common diseases are discussed, although the actual number is much greater. For easy reference the "Index of Diseases and Symptoms" is appended at the back of the book.

3. Practical: Using simple language and plenty of pictures and diagrams to guarantee that readers can easily learn, memorize and apply the principles of scraping therapy. As long as they master the methods explained in Chapter Three, readers without any medical knowledge can apply scraping therapy to themselves or others, with reference to the pictures in Chapters Four and Five. Besides scraping therapy, herbal treatment for each disease or syndrome is explained and may be used in combination with the scraping techniques.

Referring to the Holographic Meridian Hand Diagnosis and pictures at the back of the book will enhance accuracy of diagnosis and increase the effectiveness of scraping therapy.

Since the first publication and distribution of the Chinese edition of the book in July 1995, it has been welcomed by both medical specialists and lay people. In March 1996 this book was republished and adopted as a textbook by the School for Advanced Studies of Traditional Chinese Medicine affiliated to the Institute of Acupuncture and Moxibustion of the China Academy of Traditional Chinese Medicine.

In order to bring this health care method to more and more people, and to make traditional Chinese medicine better appreciated, the authors have modified and replenished this book in the spirit of constant improvement. They hope that they may make a contribution to the health care of mankind with this natural therapy, which has no side-effects and causes no pollution.

They hope that the Holographic Meridian Scraping Therapy can help the health and happiness of more and more families in the world.

Source: http://ezinearticles.com/?The-Features-of-the-Holographic-Meridian-Scraping-Therapy&id=5005031

Sunday, 17 May 2015

Dapper: The Scraper for the Common Man

Sometimes, especially with Web 2.0 companies, jargon can get a little bit out of hand. When someone says that a service allows you to "build an API for any website", it can be a bit difficult to understand what that really means.

However, put simply, Dapper is a scraper. Nothing more. It allows you to scrape content from a Web page and convert it into an XML document that can be easily used at another location. Though you won't find the words "scrape" or "scraper" anywhere on its site, that is exactly what it does.

What separates Dapper from other scrapers, both legitimate and illegitimate, is that it is both free and easy to use. In short, it makes the process of setting up the scraper simple enough for your every day Internet user. While one has never needed to be a geek to scrape RSS feeds, now the technologically impaired can scrape content from any site, even those that don't publish RSS feeds.

Though the TechCrunch profile of the service says that Dapper "aims to offer some legitimate, valuable services and set up a means to respect copyright" others are expressing concern about the potential for copyright violations, especially by spam bloggers.

Either way though, both the cause for concern and the potential dangers are very, very real.

What is Dapper

When a user goes to create a new "Dapp", he or she first needs to provide a series of links. These links must be on the same domain and in similar formats (e.g. Google searches for different terms, or different blog posts on a single site) for the service to work. Once the links have been defined, the user is then taken to a GUI where they pick out fields.

In a simple example where the user creates their own RSS feed for a blog, the post title might be one field, perhaps called "post title", and the body would be a second, perhaps called "post body". Dapper, much like the social bookmarking service Clipmarks, is able to intelligently select blocks of text on a Web page, making it easy to ensure that the entire post body is selected and that extraneous information is omitted.

Once the fields have been selected, the user can then either create groups based upon those fields or simply save the Dapp for future use. Once the Dapp has been saved, they can then use it to create raw XML data, an RSS feed, a Google Gadget or any number of other output files that can be easily used in other services.

If you are interested in viewing a demo of Dapper, you can do so at this link.

There is little doubt that Dapper is an impressive service. It has taken the black art of scraping and made it into a simple, easy-to-use application that just about anyone can pick up. Though it might take a few tries to create a working Dapp, and certainly spending some time reading up on the service is required, most will find it easy to use, especially when compared to the alternatives.

However, it's this ease of use that has so many worried. Though scrapers have been around for many years, they have been either difficult to use or expensive. Dapper's power, when combined with its price tag and sheer ease of use, has many wondering whether it might be ushering in not a new age for the Web, but a new age for scrapers seeking to abuse others' hard work.

Cause for Concern

While being easy to use or free is not necessarily a problem in and of itself, in the rush to enable users to make an API for any site, Dapper's makers forget that many sites don't have one, or restrict access to their APIs, for very good reasons. RSS scraping is perhaps the biggest copyright issue bloggers face. It enables a plagiarist or spammer to steal not only all of the content on the blog right then, but also all of the content that will be posted in the future. This is a huge concern for many bloggers, especially those concerned about performing well in the search engines.

This has prompted many blogs to either disable their RSS feeds, truncate them or move them to a feed monitoring service such as Feedburner. However, if users can simply create their own RSS feeds with ease, these protections are circumvented and Webmasters lose control over their content.

Even with potential copyright abuse issues aside, Dapper creates potential problems for Webmasters. It bypasses the usual metrics that site owners have. A user who reads a site, or large portions of it, through a Dapp will not be counted in either the feed statistics or, depending on how Dapper is set up, even in the site's logs. All the while, the site is spending precious resources to feed the Dapp, taking money out of the Webmaster's pocket.

This combination of greater expense, less traffic and less accurate metrics can be dangerous to Webmasters who are working to get accurate traffic counts, visitor feedback or revenue.

Worse still, Dapp users also bypass any ads or other monetization tools that might be included in the site or the original RSS feed. This has a direct impact on sites trying to either turn a profit or, like this one, recoup some of the costs of hosting.

Despite this, it's the copyright concerns that reign supreme. Though screen scraping is not necessarily an evil technology, it is the sinister uses that have gotten the most attention and, sadly, seem to be the most common, especially in regards to blogs.

Even if the makers of Dapper are aiming to add copyright protection at a later date, the service is fully functional today and, though the FAQ states that they will "comply with any verified request by the lawful owner of the content to cease using his content," there is no opt-out procedure, no DMCA information filed with the United States Copyright Office, no information on how to prevent Dapper from accessing your site, and nothing but a contact page for getting in touch with the makers of the service.

(Note: An email sent to the makers of Dapper on the 22nd has, as of yet, gone unanswered)

In addition to creating a potential copyright nightmare for Webmasters, the site seems to be setting itself up for a lawsuit. Beyond not being DMCA Safe Harbor compliant (PDF), which opens it up to copyright infringement lawsuits directly, the service seems vulnerable to a lawsuit under the MGM v. Grokster case, which found that service providers can be sued for infringement conducted by their users if they fail an "inducement" test. Sadly for Dapper, simply saying that it is the user's responsibility is not adequate to pass such a test, as Grokster found out. The failure to offer filtering technology and the encouragement to create APIs for "any" site are both likely strikes against Dapper in that regard.

To make matters more grim, copyright is not the only issue scrapers have to worry about. As one pair of lawyers put it, there are at least four different legal theories that can make scraping illegal, including the Computer Fraud and Abuse Act, trespass to chattels and breach of contract. All in all, copyright is practically the least of Dapper's problems.

When it's all said and done, there is a lot of room for concern, not just on the part of Webmasters that might be affected by Dapper or its users, but also its makers. These intellectual property and other legal issues could easily sink the entire project.

Conclusions

It is obvious that a lot of time and effort went into creating Dapper. It's a very powerful, easy to use service that opens up interesting possibilities. I would hate to see the service used for ill and I would hate even worse to see all of the hard work that went into it lost because of intellectual property issues.

However, in its current incarnation, it seems likely that Dapper is going to encounter significant resistance on the IP front. There is little, if any protection or regard for intellectual property under the current system and, once bloggers find out that their content is being syndicated without their permission by the service, many are likely to start raising a fuss.

Even though Dapper has gotten rave reviews in the Web 2.0 community, it seems likely that traditional bloggers and other Web site owners will have serious objections to it. Those people, sadly, most likely have never heard of Dapper at this point.

With that being said, it is a service everyone needs to make note of. The one thing that is for certain is that it will be in the news again. The only question is what light will it be under.

Source: https://www.plagiarismtoday.com/2006/08/24/dapper-the-scraper-for-the-common-man/

Wednesday, 6 May 2015

Web Scraping Services Are Important Tools For Knowledge

Data extraction and web scraping techniques are important tools for finding relevant data and information for personal or business use. Many companies employ people simply to copy and paste data from web pages. This process is reliable, but very expensive, because it wastes time and effort: the value of the data collected is small compared with the resources and time required to gather it.

At present, several data mining companies have developed effective web scraping techniques that can crawl thousands of pages of information. The information is then delivered in whatever format is required – a CSV file, database, XML file, or any other source. Understanding the correlations and patterns in the data means policies can be designed to assist decision-making. The information can also be stored for future reference.

The following are some common examples of the data extraction process:

Extracting names of citizens from a government portal for a given survey

Scraping websites for competitive pricing and product data

Downloading videos and photos from stock photography or web design websites

Automatic Data Collection

Web scraping can also collect data on a regular basis. Automated data collection techniques are important because they help companies discover customer and market trends. By determining market trends, it is possible to understand customer behaviour and predict how likely it is that the data will change.

The following are some examples of automated data collection:

Monitoring hourly rates for particular stocks

Collecting daily mortgage rates from various financial institutions

Checking weather reports on a regular basis

By using web scraping services, you can extract all the data related to your business. The data can then be downloaded to a spreadsheet or database for analysis and comparison. Storing the data in a database, or in another required format, makes it easier to interpret the correlations and to identify hidden patterns.

Through data extraction services, it is possible to gather pricing, email, database and profile data, and to monitor competitors consistently. Different techniques and processes for collecting and analyzing data have been designed and developed over time. Web scraping is one business process that has taken the market by storm recently. It is a process that delivers large amounts of data from various sources such as websites and databases.

Some of the most common methods used in scraping include web crawling, text grepping, DOM parsing, and regular-expression matching. After this processing, HTML pages can be parsed, or meaning can be extracted through annotations. There are many different ways of scraping data, but all of them work toward the same goal: the main purpose of using a web scraping service is to retrieve and compile data from databases and websites, and to remain relevant in the business world.

The central question is about the relevance of web scraping: is the process relevant to the business world? The answer is yes. The fact that it is used by many large companies around the world speaks for itself.

Source: http://www.selfgrowth.com/articles/web-scraping-services-are-important-tools-for-knowledge

Thursday, 30 April 2015

Customized Web Data Extraction Solutions for Business

As you begin leading your business on the path to success, competitive analysis forms a major part of your homework. You have already mobilized your efforts in finding the appropriate website data scraping tool that will help you collect relevant data from competitors' websites and shape it into usable information. There is, however, a need to look for a customized approach in your search for data extraction tools in order to leverage their benefits in the best possible way.

Off-the-shelf Tools Impede Data Extraction

In the current scenario, Internet technologies are evolving in abundance. Every organization leverages this development and builds its website using a different programming language and technology. Off-the-shelf website data extraction tools are unable to interpret this difference. They fail to understand the data elements that need to be captured and, since their source code cannot be modified for each site, end up gathering data indiscriminately.

As a result of this incapability in their technology, off-the-shelf solutions often deliver unclean, incomplete and inaccurate data. Developers then need to contribute a huge effort to cleaning up and structuring the data to make it usable. Despite this time-consuming activity, the data seldom metamorphoses into the desired information, and the personnel dealing with the clean-up need sufficient technical expertise to participate in the activity. The endeavour thus becomes an impediment to the whole process of data extraction, leaving you thirsting for the information required to augment business growth.

Understanding how Web Extraction tools work

Web scraping tools are designed to extract data from the web automatically. They are usually small pieces of code written in programming languages such as Python, Ruby or PHP, depending upon the expertise of the community building them. There are, however, several single-click models available which tend to make life easier for non-technical personnel.

The biggest challenge faced by a successful web extractor tool is knowing how to tackle the right page and the right elements on that page in order to extract the desired information. Consequently, a web extractor needs to be designed to understand the anatomy of a web page in order to accomplish its task successfully. It should be designed to interpret the meaning of HTML elements like <table>, table rows (<tr>) within those tables, and table data (<td>) cells within those rows in order to extract the exact data. It will also be interfacing with the <p> element, blocks of text, and should know how to extract the desired information from them.
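As a concrete illustration of walking that anatomy, here is a short Python sketch using requests and lxml that reads the rows and cells of the first table on a (hypothetical) page:

import requests
from lxml import html

tree = html.fromstring(requests.get("http://example.com/prices").content)

# Walk the anatomy: <table> -> <tr> rows -> <td> cells within each row.
for row in tree.xpath("//table[1]//tr"):
    cells = [td.text_content().strip() for td in row.xpath("./td")]
    if cells:  # header rows built from <th> come back empty here
        print(cells)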

Customized Solutions for your business

Customized solutions are provided by most data scraping experts. These tools help minimize the cumbersome effort of writing elaborate code to accomplish the feat of data extraction. They are designed to seamlessly search competitive websites, identify relevant data elements, and extract appropriate data that will be useful for your business. Owing to their focused approach, these tools provide clean and accurate data, eliminating the need to waste valuable time and effort on clean-up.

Most customized data extraction tools are also capable of delivering the extracted data in customized formats like XML or CSV. They can also store data in local databases like Microsoft Access, MySQL, or Microsoft SQL Server.

Customized Data scraping solutions therefore help you take accurate and informed decisions in order to define effective business strategies.

Source: http://scraping-solutions.blogspot.in/2014_07_01_archive.html 

Saturday, 25 April 2015

Social Media Crawling & Scraping services for Brand Monitoring

Crawling social media sites for extracting information is a fairly new concept – mainly due to the fact that most of the social media networking sites have cropped up in the last decade or so. But it’s equally (if not more) important to grab this ever-expanding User-Generated-Content (UGC) as this is the data that companies are interested in the most – such as product/service reviews, feedback, complaints, brand monitoring, brand analysis, competitor analysis, overall sentiment towards the brand, and so on.

Scraping social networking sites such as Twitter, LinkedIn, Google Plus, Instagram etc. is not an easy task for the in-house data acquisition departments of most companies, as these sites have complex structures and also restrict the amount and frequency of the data that they let out to crawlers. This kind of task is best left to an expert, such as PromptCloud's Social Media Data Acquisition Service – which can take care of your end-to-end requirements and provide you with the desired data in a minimal turnaround time. Most of the popular social networking sites such as Twitter and Facebook let crawlers extract data only through their own API (Application Programming Interface), so as to control the amount of information about their users and their activities.
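By way of illustration, keyword-based collection through Twitter's official API can be sketched in a few lines of Python with the tweepy library (the v3.x interface is shown here, and the credentials are placeholders you would obtain from Twitter's developer portal):

import tweepy

# Placeholder credentials from Twitter's developer portal.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Fetch recent tweets mentioning a brand, respecting the rate limits.
for tweet in api.search(q="your brand", count=100):
    print(tweet.created_at, tweet.user.screen_name, tweet.text)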

PromptCloud respects all these restrictions with respect to access to content and frequency of hitting their servers to make sure that user information is not compromised and their experience with the site is unhindered.

Social Media Scraping Experts

At PromptCloud, we have developed an expertise in crawling and scraping social media data in real-time. Such data can come from diverse sources such as Twitter, LinkedIn groups, blogs, news, reviews etc. Popular uses of this data include brand monitoring, trend watching, sentiment/competitor analysis and customer service, among others.

Our low-latency component can extract data on the basis of specific keywords, categories, geographies, or a combination of these. We can also take care of complexities such as multiple languages as well as tweets and profiles of specific users (based on keywords or geographies). Sample XML data can be accessed through this link – demo.promptcloud.com.

Structured data is delivered via a single REST-based API, and every time new content is published the feed gets updated automatically. We also provide data in any other preferred format (XML, CSV, XLS etc.).
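For illustration only, here is a sketch of how a client might consume such a feed in Python; the endpoint, parameters and XML element names below are hypothetical stand-ins, not PromptCloud's actual API:

import requests
import xml.etree.ElementTree as ET

resp = requests.get(
    "https://api.example.com/social-feed",   # hypothetical endpoint
    params={"keyword": "acme", "format": "xml"},
    timeout=30,
)
root = ET.fromstring(resp.text)
for post in root.iter("post"):               # hypothetical element name
    print(post.findtext("author"), post.findtext("content"))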

If you have a social media data acquisition problem that you want to get solved, please do get in touch with us.

Source: https://www.promptcloud.com/social-media-networking-sites-crawling-service/

Wednesday, 22 April 2015

Hand Scraped Flooring For a Natural and Unique Look

An option in hardwood flooring that is being increasingly adopted by those looking for something new, innovative and unique for their homes is hand scraped flooring. This type of wood flooring helps one achieve a distinct natural look on one's floor and also has a couple of advantages.

There are three types of scraping that you can get done on your wooden flooring: light, medium and hard. Preferably, if you have light colored woodwork you should go for light scraping, and if your floor has a darker shade you should opt for hard scraping. But irrespective of the type of scraping you go for, you must ensure that the laborers doing the scraping are highly skilled, as hand scraping floors is an art that demands patience, time, talent and hard work.

Nowadays, many people tend to go for machine scraping, attracted by the lower investment involved in it. But such people are unable to achieve the requisite natural effect on their floors as machines create patterns on the floors that are easily detectable. These patterns do not emerge with hand scraping and the consequent look is as random and unique as it gets.

Though such scraped flooring is a costly option, it demands little maintenance. While with perfectly smooth surfaces you will always be on edge ensuring that there are no scratches, with hand scraped floors you need not be concerned: any new scratches will only add to the already distressed appearance of the flooring.

Prefinished hand scraped wood flooring is also available in the market nowadays, eliminating the need for any on-site scraping. This option is of course unsuitable for those who have already had their floors installed. Then again, with on-site scraping you have more control over the result, as you can watch the scraping as it develops and are therefore in a position to exercise your preferences.

Source: http://ezinearticles.com/?Hand-Scraped-Flooring-For-a-Natural-and-Unique-Look&id=4581623

Saturday, 18 April 2015

How to Generate Sales Leads Using Web Scraping Services

The first stage of any selling process is what is popularly known as “lead generation”. This phase is what most businesses place at the apex of their sales concerns. It is a driving force that governs decision-making at its highest levels, and influences business strategy and planning. If you are about to embark on an outbound sales campaign and are in the process of looking for leads, you will appreciate that the lead generation process is of extreme importance for any business.

Different lead generation techniques have been used over and over again by companies around the world to satiate this growing business need. Newer, more innovative methods have also emerged to help marketers in this process. One such method of lead generation that is fast catching on, and is poised to play a big role for businesses in the coming years, is web scraping. With web scraping, you can easily get access to multiple relevant and highly customized leads – a perfect starting point for any marketing, promotional or sales campaign.

The prominence of Web Scraping in overall marketing strategy

At present, levels of competition have risen sky high for most businesses. For success, lead generation and gaining insight into customer behavior and preferences are essential business requirements. Web scraping is the process of scraping or mining the internet for information. Different tools and techniques can be used to harvest information from multiple internet sources based on relevance, which is then structured and organized in a way that makes sense to your business. Companies that provide web scraping services essentially use web scrapers to generate a targeted lead database that your company can then integrate into its marketing and sales strategies and plans.

The actual process of web scraping involves creating scraping scripts or algorithms which crawl the web for information based on certain preset parameters and options. The scraping process can be customized and tuned towards finding the kind of data that your business needs. The script can extract data from websites automatically and collate it into a meaningful collection of leads for business development.
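As a toy illustration of such a script, the Python sketch below crawls a preset list of pages and collates email addresses into a lead list; the URL and the email-as-lead reading are illustrative assumptions, not a prescribed method:

import re
import requests

TARGET_URLS = ["http://example.com/contact"]        # preset parameters
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")  # pre-defined pattern

leads = set()
for url in TARGET_URLS:
    page = requests.get(url, timeout=30).text
    leads.update(EMAIL_RE.findall(page))            # collate the matches

print(sorted(leads))                                # the collection of leads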

Lead Generation Basics

At a very high level, any person who has the resources and the intent to purchase your product or service qualifies as a lead. In the present scenario, you need to go far deeper than that. Marketers need to observe behavior patterns and purchasing trends to ensure that a particular person qualifies as a lead. If you have a group of people you are targeting, you need to decide who the viable leads will be, acquire their contact information and store it in a database for further action.

List buying used to be a popular way to get leads, but its efficacy has dwindled over time. Web scraping is fast coming up as a feasible lead generation technique, allowing you to find highly focused and targeted leads in a short amount of time. All you need is a service provider to carry out the data mining necessary for lead generation, and you end up with a list of actionable leads that you can try selling to.

How Web Scraping makes a substantial difference

With web scraping, you can extract valuable predictive information from websites. Web scraping facilitates high quality data collection and allows you to structure marketing and sales campaigns better. To drive sales and maximize revenue, you need strong, viable leads. To find them, you need critical data encompassing customer behavior, contact details, buying patterns and trends, willingness and ability to spend, and a myriad of other aspects critical to ascertaining the potential of an entity as a rewarding lead. Data mining through web scraping can be a great way to get at these factors and identify the leads that would make a difference for your business.


Crawling through many different web locales using different techniques, web scraping services pick up a wealth of information. This highly relevant and specialized information instantly provides your business with actionable leads. Furthermore, this exercise allows you to fine-tune your data management processes, make more accurate and reliable predictions and projections, arrive at more effective, strategic and marketing decisions and customize your workflow and business development to better suit the current market.

The Process and the Tools

Lead generation, being one of the most important processes for any business, can prove to be an expensive proposition if not handled strategically. Companies spend large amounts of their resources acquiring viable leads they can sell to. With web scraping, you can dramatically cut down the costs involved in lead generation and take your business forward with speed and efficiency. Here are some of the time-tested web scraping tools which can come in handy for lead generation –

•    Website download software – Used to copy entire websites to local storage. All website pages are downloaded and the hierarchy of navigation and internal links is preserved. The stored pages can then be viewed and scoured for information at any later time (a rough sketch of this idea follows this list).

•    Web scraper – Tools that crawl through bulk information on the internet, extracting specific, relevant data using a set of pre-defined parameters.

•    Data grabber – Sifts through websites and databases fast and extracts all the information, which can be sorted and classified later.

•    Text extractor – Can be used to scrape multiple websites or locations for acquiring text content from websites and web documents. It can mine data from a variety of text file formats and platforms.
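Here is the rough Python sketch of the website download idea promised above; it is simplified and illustrative (a real tool would also rewrite links, honor robots.txt and throttle requests), and the starting URL is a placeholder:

import os
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def download_site(start_url, out_dir="site_copy", max_pages=20):
    os.makedirs(out_dir, exist_ok=True)
    host = urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = requests.get(url, timeout=30).text
        with open(os.path.join(out_dir, f"page{len(seen)}.html"),
                  "w", encoding="utf-8") as f:
            f.write(html)                 # store the page for later scouring
        # Preserve the navigation hierarchy by queueing internal links.
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == host:
                queue.append(link)

download_site("http://example.com")       # hypothetical starting point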

With these tools, web scraping services scrape websites for lead generation and provide your business with a set of strong, actionable leads that can make a difference.

Covering all Bases

The strength of web scraping and web crawling lies in the fact that it covers all the necessary bases when it comes to lead generation. Data is harvested, structured, categorized and organized in such a way that businesses can easily use the data provided for their sales leads. As discussed earlier, cold and detached lists no longer provide you with enough actionable leads. You need to look at various factors and consider them during your lead generation efforts –

•    Contact details of the prospect

•    Purchasing power and purchasing history of the prospect

•    Past purchasing trends, willingness to purchase and history of buying preferences of the prospect

•    Social markers that are indicative of behavioral patterns

•    Commercial and business markers that are indicative of behavioral patterns

•    Transactional details

•    Other factors including age, gender, demography, social circles, language and interests

All these factors need to be taken into account and considered in detail to determine whether a lead is viable and actionable or not. With web scraping you can get enough data about every single prospect, connect all the data collected with the help of onboarding, and ascertain with conviction whether a particular prospect will be viable for your business.

Let us take a look at how web scraping addresses these different factors –

1. Scraping websites

During the scraping process, all websites where a particular prospect has some participation are crawled for data. Seemingly disjointed data can be made into a sensible whole through onboarding: linking user activities with their online identities with the help of user IDs. Documents can be scanned for participation. E-commerce portals can be scanned to find comments and ratings a prospect might have given to certain products. Service providers’ websites can be scraped to find whether the prospect has given a testimonial for any particular service. All these details can then be accumulated into a meaningful data collection that is indicative of the purchasing power and intent of the prospect, along with important data about buying preferences and tastes.
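As an illustrative sketch only, the snippet below scans a product page for one prospect's comments and ratings; the URL, CSS class names and prospect handle are hypothetical stand-ins for whatever the target portal actually uses:

import requests
from bs4 import BeautifulSoup

html = requests.get("http://shop.example.com/product/123").text
soup = BeautifulSoup(html, "html.parser")

for review in soup.select("div.review"):            # hypothetical class name
    author = review.select_one(".author")
    rating = review.select_one(".rating")
    body = review.select_one(".body")
    if author and author.get_text(strip=True) == "target-prospect":
        print(rating.get_text(strip=True) if rating else "?",
              body.get_text(strip=True) if body else "")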

2. Social scraping

According to a study, most internet users spend upwards of two hours every day on social networks. Therefore, scraping social networks is a great way to explore prospects in detail. Initially, you can get important identification markers like names, addresses, contact numbers and email addresses. Further, social networks can also supply information about age, gender, demography and language choices. From this basic starting point, further details can be added by scraping social activity over long periods of time and looking for activities which indicate purchasing preferences, trends and interests. This exercise provides highly relevant and targeted information about prospects that can be constructively used while designing sales campaigns.

Check out How to use Twitter data for your business

3. Transaction scraping

Through the scraping of transactions, you get a clear idea about the purchasing power of prospects. If you are looking for certain income groups or leads that invest in certain market sectors or during certain specific periods of time, transaction scraping is the best way to harvest meaningful information. This also helps you with competition analysis and provides you with pointers to fine-tune your marketing and sales strategies.


Using these varied lead generation techniques and finding the right balance and combination is key to securing the right leads for your business. Overall, signing up for web scraping services can be a make or break factor for your business going forward. With a steady supply of valuable leads, you can supercharge your sales, maximize returns and craft the perfect marketing maneuvers to take your business to an altogether new dimension.

Source: https://www.promptcloud.com/blog/how-to-generate-sales-leads-using-web-scraping-services/

Tuesday, 7 April 2015

rvest: easy web scraping with R

rvest is a new package that makes it easy to scrape (or harvest) data from HTML web pages, inspired by libraries like Beautiful Soup. It is designed to work with magrittr so that you can express complex operations as elegant pipelines composed of simple, easily understood pieces. Install it with:

install.packages("rvest")

rvest in action

To see rvest in action, imagine we’d like to scrape some information about The Lego Movie from IMDB. We start by downloading and parsing the file with html():

library(rvest)

lego_movie <- html("http://www.imdb.com/title/tt1490017/")

To extract the rating, we start with selectorgadget to figure out which css selector matches the data we want: strong span. (If you haven’t heard of selectorgadget, make sure to read vignette("selectorgadget") – it’s the easiest way to determine which selector extracts the data that you’re interested in.) We use html_node() to find the first node that matches that selector, extract its contents with html_text(), and convert it to numeric with as.numeric():

lego_movie %>%

  html_node("strong span") %>%

  html_text() %>%

  as.numeric()

#> [1] 7.9

We use a similar process to extract the cast, using html_nodes() to find all nodes that match the selector:

lego_movie %>%

  html_nodes("#titleCast .itemprop span") %>%

  html_text()

#>  [1] "Will Arnett"     "Elizabeth Banks" "Craig Berry"   

#>  [4] "Alison Brie"     "David Burrows"   "Anthony Daniels"

#>  [7] "Charlie Day"     "Amanda Farinos"  "Keith Ferguson"

#> [10] "Will Ferrell"    "Will Forte"      "Dave Franco"   

#> [13] "Morgan Freeman"  "Todd Hansen"     "Jonah Hill"

The titles and authors of recent message board postings are stored in the third table on the page. We can use html_nodes() and [[ to find it, then coerce it to a data frame with html_table():

lego_movie %>%

  html_nodes("table") %>%

  .[[3]] %>%

  html_table()

#>                                              X 1            NA

#> 1 this movie is very very deep and philosophical   mrdoctor524

#> 2 This got an 8.0 and Wizard of Oz got an 8.1...  marr-justinm

#> 3                         Discouraging Building?       Laestig

#> 4                              LEGO - the plural      neil-476

#> 5                                 Academy Awards   browncoatjw

#> 6                    what was the funniest part? actionjacksin

Other important functions

•    If you prefer, you can use xpath selectors instead of css: html_nodes(doc, xpath = "//table//td").

•    Extract the tag names with html_tag(), text with html_text(), a single attribute with html_attr() or all attributes with html_attrs().

•    Detect and repair text encoding problems with guess_encoding() and repair_encoding().

•    Navigate around a website as if you’re in a browser with html_session(), jump_to(), follow_link(), back(), and forward(). Extract, modify and submit forms with html_form(), set_values() and submit_form(). (This is still a work in progress, so I’d love your feedback.)

To see these functions in action, check out package demos with demo(package = "rvest").

Source: http://blog.rstudio.org/2014/11/24/rvest-easy-web-scraping-with-r/

Monday, 30 March 2015

How does Web Scraping Identify the Data you Want

The Web is one of the biggest sources of data that can be leveraged for your business. Be it an email, a URL or even the text of a hyperlink, whatever you are looking at comprises data that could be translated into useful information for your business. The challenge, however, lies in identifying the data that is relevant for your needs and enabling access to it. Web scraping tools are geared to help you address this need and leverage the benefit of this huge information repository.

Web Scraping and How it Works

Web scraping is the practice of extracting data from relevant sources on the Web and transforming it into crucial information packages for use in your business. This is an automated process, executed with the help of a host of intuitive web extraction tools, facilitating ease, accuracy and convenience in extracting vital data.

Scraping also involves writing intelligent pieces of code that scour the web and extract the data you need for the benefit of your business. The languages commonly used for coding these scrapers are Python, Ruby and PHP; the language you choose will be determined by the community you have access to.

As mentioned earlier, the biggest challenge in web scraping is identifying the right URL, page and element from which to scrape the required information. No matter how good you may be at coding scripts, none of that will help you achieve your objective if you fail to develop an understanding of the way the web is structured. It is this understanding that will enable you to structure your code in the manner most effective at scraping the desired information.
Understanding a Web Site

A web site appears in your browser owing to two technologies. These include:

  •     HTTP – the protocol used to communicate with the server to request the retrieval of resources, namely images, videos, documents and so on.
  •     HTML – the language that displays the retrieved information in the browser.

The display format of your website is therefore defined using HTML, and it is within the folds of its syntax that you will find the data you need to extract. It is therefore important that you understand the anatomy of a web site by studying the structure of an HTML page.

The HTML Page Structure

An HTML page comprises a stack of elements known as tags, each bearing a specific significance. The first among these is the header tag, which encloses most of the other elements. The table element, the most important as far as data containers are concerned, is a crucial element that you need to study. It comprises several table row (TR) elements and the table data (TD) elements within them that hold the vital data nuggets you might need to train your scrapers to extract.

In addition to these, HTML pages comprise a series of other tags that act as vital data holders, namely image tags (img src), hyperlinks (a href) and div tags, which essentially refer to blocks of text.

The scraper code needs to be built around your understanding of these HTML elements. Knowing the elements will help you understand exactly where the relevant data is stacked, which lets you define the code correctly so that the scraper searches for and extracts the right element and provides you with the most appropriate information.
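A small illustrative sketch of locating those data holders in Python with beautifulsoup4 (the URL is a placeholder):

import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("http://example.com").text, "html.parser")

images = [img["src"] for img in soup.find_all("img", src=True)]  # img src
links = [a["href"] for a in soup.find_all("a", href=True)]       # a href
blocks = [d.get_text(strip=True) for d in soup.find_all("div")]  # div text

print(images, links, blocks[:3])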

Webdatascraping.us is a leading company, fully capable of extracting website information, review scraping, contact information scraping, business directory scraping, email list scraping and more.

Friday, 27 March 2015

Scraping expert's Amazon Scraper provides huge access to find your desired product on Amazon

Today, with the latest advancements in technology, we find plenty of ecommerce websites offering huge benefits to people by selling various products from different categories at affordable cost. For the most renowned ecommerce website, Amazon, Scrapingexpert.com has launched its all-new Amazon Scraper for the comfort of its customers. A tool like the Amazon Scraper performs what is also called web harvesting: a computer software technique for extracting data from websites.

Today anyone can find web scraping tools that are specifically designed for particular websites. The Amazon Scraper, for example, is a web scraper tool used to crawl, scrape and extract data from the largest ecommerce website, Amazon.com. Scrapingexpert.com offers the best Amazon scraper for easily extracting details of plenty of products from the site.


Let us see how the Amazon Scraper works:


How to use:
  •     Step 1) Select the category and enter the keyword, UPC or ASIN
  •     Step 2) Set the delay in seconds
  •     Step 3) Click Start

You can also scrape the following details from Amazon.com:

  •     Product Title & Description
  •     Category & Cost
  •     Manufacturer
  •     QTY
  •     Seller Name
  •     Total Sellers
  •     Shipping Cost
  •     Shipping / Product Weight
  •     ImageURL, IsBuyBoxFBA, Source Link
  •     Stars, Customer Reviews
  •     ASIN, UPC, Model Number
  •     Sales Rank
  •     Sales Rank In Category

Here are some interesting product features:
  •     Single-screen dashboard that shows total extracted records, extracted keywords and elapsed time
  •     Filter search - skip data that does not match phrases or keywords
  •     Compatible with Microsoft Windows XP/Vista/7
  •     Option to set a delay between requests to simulate a human surfing in a browser
  •     Extracted data is stored in CSV format, which you can easily open in Excel (see the sketch after this list)

Benefits:
  •     Less expensive - With our valuable services, we allow you to save both your efforts and money. Some of our competitors even outsource their scraping projects to us.
  •     Guaranteed accurate results - We assure you of the most reliable solutions, with accurate results that could not be collected manually by any ordinary human being.
  •     Delivers fast results - We promise to get your work done in just a few hours, work that could take plenty of time if done by someone else. We save your time, workforce and money, and give you an opportunity to stand out from your many competitors.

System requirements: Operating system - Windows XP, Windows Vista or Windows 7; .NET Framework 2.0
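Here is the sketch promised in the features list: the CSV storage step only, in Python (the field names are illustrative, not the tool's actual schema):

import csv

records = [{"title": "Example Widget", "asin": "B000000000", "stars": "4.5"}]

with open("amazon_products.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "asin", "stars"])
    writer.writeheader()
    writer.writerows(records)   # a file that opens cleanly in Excel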

Are you searching for a cost-effective program to extract the data you need? If your answer is yes, then we offer Amazon screen scraping, the best method of extracting data from Amazon. In today's competitive world of advanced technology there are multiple companies claiming to offer the best Amazon screen scraping services, and hiring such a service can allow you to scrape almost any data in any format you wish to obtain. At Scrapingexpert.com, we study every last detail of the scraping project and then provide you with a free quote and a date for completing the work.

In order to get accurate data pertaining to a specific product, you can use our Amazon Scraper tool. It is a very effective tool that will help you extract information about any product from Amazon.

Websitedatascraping.com is fully capable of web data scraping, website data scraping, web scraping services, website scraping services, data scraping services, product information scraping and yellowpages data scraping.

Tuesday, 24 March 2015

Data Mining Process - Why Outsource Data Mining Service?

Overview of Data Mining and Process:

Data mining is a unique technique for investigating information, extracting data patterns and deciding the outcome of existing requirements. Data mining is widely used in client research, services analysis, market research and so on. It is based on mathematical algorithms and analytical skills to derive the desired results from huge database collections.

Information mining is mostly used by financial analysts and business and professional organizations, and there are many growing areas of business, from small to large, that gain maximum advantage from data extraction through the use of data warehouses.

Most of the functions used in the information collecting process are defined as under:


* Retrieving Data
* Analyzing Data
* Extracting Data
* Transforming Data
* Loading Data
* Managing Databases

Small, medium and large businesses alike collect huge amounts of data or information for analysis and research to develop their business. Such a large store of data is helpful, and becomes all the more important, whenever information or data is required.

Why Outsource Data Online Mining Service?

Outsourcing advantages of data mining services:


o Save almost 60% in operating costs

o High quality analysis processes ensuring accuracy levels of almost 99.98%

o Guaranteed risk-free outsourcing experience, ensured by stringent information security policies and practices

o Get your project done within a quick turnaround time

o You can gauge our high skill and expertise by taking advantage of the Free Trial Program.

o Get the gathered information presented in a simple and easy to access format

Thus, data or information mining is a very important part of web research services and a most useful process. By outsourcing data extraction and mining services, you can concentrate on your core business and grow as fast as you desire.

Outsourcing Web Research is a trusted and well-known internet market research organization with years of experience in the BPO (business process outsourcing) field.

If you want more information about data mining services and related web research services, then contact us.

Outsourcing Web Research has first-rate infrastructure, including 200+ workstations supported by advanced technologies for operational efficiency and optimum security of your data and information.

Source: http://ezinearticles.com/?Data-Mining-Process---Why-Outsource-Data-Mining-Service?&id=3789102

Tuesday, 17 March 2015

Professional Web Scraping Process

Web scraping is usually regarded as data mining and knowledge discovery: the process of extracting useful data and relationships from data sources such as web pages, databases and search engines. It employs pattern matching and statistical techniques. It is important to note that web scraping does not borrow from fields like machine learning, databases and data visualization, but rather supports them.

The web scraping process is complex and requires not only time but also people with expertise in the field, because the internet is a dynamic resource that changes constantly. For instance, the data you could extract from a certain website a month ago may not be the same data you would extract now. Because data changes in such short periods of time, it is difficult to rely on, which is why the web scraping process should be performed regularly in order to obtain accurate data that can be relied upon.

It is important to understand that many areas of business, science and other environments use large amounts of data, and this data needs to be meaningful and applied with knowledge. Web scraping may sometimes be overlooked, but in essence it can provide more useful information than statistical methods can produce, and web scraping methods are vital as they give you more control over the data.

Usually the data found on the internet is noisy data, riddled with advertisements and pop-ups. Data found on the internet can also be described as dynamic, sparse, static, heterogeneous and so forth. Such problems occur at very large scale and therefore call for professional web scraping companies to perform the job. With such problems, it is important to realize that statistical methods alone would never succeed, which again calls for web scraping.

Process of web scraping

1. Identification of data sources and selection of target data. You should not harvest just any kind of data, but only data that is deemed relevant and useful in its application. Relevance can be seen as getting the data that will benefit your company. This is an important step in the web scraping process.

2. Pre-processing. This involves cleaning and attribute selection of data before it is harvested. Web scraping is usually done on specific websites that are relevant to your business. For instance, if you have an online store and need information about your competitors' products, then you need data from other relevant websites, such as e-commerce stores and so on.

3. Web scraping. This involves mining the data so as to extract the patterns and models that are beneficial to your business.

4. Post-processing. After web scraping is done, it is important to identify the useful data that can be applied in your business for decision making and so on.

It is important to note that the patterns identified need to be novel, understandable, potentially viable and valid for web scraping process to make sense in business data harvesting.
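To make the four steps concrete, here is an illustrative Python skeleton; every function body is a placeholder to be adapted to a real project, and the source URL is hypothetical:

import requests
from bs4 import BeautifulSoup

def select_sources():                        # 1. identify sources and targets
    return ["http://competitor.example.com/products"]

def preprocess(html):                        # 2. clean before harvesting
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):    # strip noisy elements
        tag.decompose()
    return soup

def scrape(soup):                            # 3. extract the target data
    return [h.get_text(strip=True) for h in soup.find_all("h2")]

def postprocess(items):                      # 4. keep what supports decisions
    return sorted(set(items))

for url in select_sources():
    html = requests.get(url, timeout=30).text
    print(postprocess(scrape(preprocess(html))))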

Source: http://www.loginworks.com/blogs/web-scraping-blogs/professional-web-scraping-process/

Monday, 16 March 2015

6 Benefits Associated with Data Mining

Data has been used from time immemorial by companies to manage their operations. Data is needed by organizations aiming to expand their business operations, reduce costs, improve their marketing force and, above all, improve profitability. Data mining is aimed at the creation of information assets, which organizations can use to leverage their objectives.

In this article, we discuss some of the common questions asked about the data mining technology. Some of the questions we have addressed include:

•    How can we define data mining?
•    How can data mining affect my organization?
•    How can my business get started with data mining?

Data Mining Defined

Data mining can be regarded as a new concept in enterprise decision support systems, usually abbreviated as DSS. It does more than complement and interlock with DSS capabilities such as reporting and query. It can also be used in on-line analytical processing (OLAP), traditional statistical analysis and data visualization. The technology comes up with tables, graphs and reports of past business history.

We may define data mining as the modeling of hidden patterns and the discovery of data from large volumes of data. It is important to note that data mining is very different from other retrospective technologies because it involves the creation of models. By using this technology, the user can discover patterns and use them to build models without even knowing exactly what they are after. The models explain why past events happened and can even predict what is likely to happen.

Some of the information technologies that can be linked to data mining include neural networks, fuzzy logic, rule induction and genetic algorithms. In this article we do not cover those technologies but focus on how data mining can be used to meet your business needs, so that you can translate the solutions into dollars.

Setting Your Business Solutions and Profits

One of the common questions asked about this technology is; what role can data mining play for my organization? At the start of this article we described some of the opportunities that can be associated with the use of data. Some of those benefits include cost reduction, business expansion, sales and marketing and profitability. In the following paragraphs we look into some of the situations where companies have used data mining to their advantage.

Business Expansion

Equity Financial Limited wanted to expand their customer base and attract new customers, and used a LoanCheck offer to meet these objectives. To initiate the loan, a customer simply had to go to any Equity branch and cash the LoanCheck. Equity introduced the $6,000 LoanCheck by mailing the promotion to their existing customers. The Equity database was able to track about 400 characteristics of every customer, covering the customer's loan history, their active credit cards, the current balance on those cards, and whether they responded to loan offers. Equity used data mining to sift through the 400 customer features and find the significant ones. They used that data to build a model based on responses to the LoanCheck offer, then ran the model against 500,000 potential customers from a credit bureau and selectively mailed the prospects the model rated most promising. At the end of the process they were able to generate a total of $2.1M in extra net income from 15,000 new customers.

Reduction of Operating Costs

Empire is one of the largest insurance companies in the country. In order to compete with other insurance companies, it has to offer quality services while at the same time reducing costs. It therefore has to attack costs that may take the form of fraud and abuse, which demands considerable investigation skills and data management technology. The latter calls for a data mining application that can profile every physician in their network based on the claims records of every patient in their data warehouse. The application is able to detect subtle deviations in a physician's behavior relative to his or her peer group. The deviations are then reported to the intelligence and fraud investigators as a “suspicion index.” With this effort derived from data mining, the company was able to save $31M, $37M and $41M from fraud in the first three years, respectively.

Sales Effectiveness and Profitability

In this case we look at the pharmaceutical sector. Its sales representatives have a wide assortment of tools they use to promote various products to physicians, including product samples, clinical literature, dinner meetings, golf outings, teleconferences and many more. Getting to know which promotional methods are ideal for a particular physician is therefore of great value, since a poorly targeted sales call costs the company dollars and lost revenue.

Through data mining, a drug maker was able to link eight months of promotional activity with the corresponding sales found in their database. They then used this information to build a predictive model for each physician. The models revealed that of the six promotional alternatives, only three had a significant impact. They then used the knowledge found in the data mining models to tailor their promotions, thereby improving the ROI.

Looking at these case studies, ask yourself: was data mining necessary?

Getting Started

All the cases presented above reveal how data mining was used to yield results for various businesses. Some of the results led to increased revenue and an increased customer base; others were bottom-line improvements in cost savings and productivity. In the next few paragraphs we try to answer the question: how can my company get started and begin realizing the benefits of data mining?

The right time to start your data mining project is now. With the emergence of specialized data mining companies, starting the process has been simplified and the costs greatly reduced. A data mining project can offer important insights into the field and can also crystallize the idea of creating a data warehouse.

In this article we have addressed some of the common questions regarding data mining: what benefits are associated with the process, and how a company can get started. Now, with this knowledge, your company should start with a pilot project and then continue building data mining capability in your company, to improve profitability, market your products more effectively, expand your business and reduce costs.

Source: http://www.loginworks.com/blogs/web-scraping-blogs/255-benefits-associated-with-data-mining/