Wednesday, 28 December 2016

Data Mining and Financial Data Analysis

Introduction:

Most marketers understand the value of collecting financial data, but also realize the challenges of leveraging this knowledge to create intelligent, proactive pathways back to the customer. Data mining - technologies and techniques for recognizing and tracking patterns within data - helps businesses sift through layers of seemingly unrelated data for meaningful relationships, so that they can anticipate, rather than simply react to, customer needs as well as financial needs. In this accessible introduction, we provide a business and technological overview of data mining and outline how, along with sound business processes and complementary technologies, data mining can reinforce and redefine financial analysis.

Objective:

1. The main objective is to discuss how customized data mining tools can be developed for financial data analysis.

2. Usage patterns can be categorized, in terms of purpose, according to the needs of financial analysis.

3. Develop a tool for financial analysis through data mining techniques.

Data mining:

Data mining is the procedure of extracting, or mining, knowledge from large quantities of data; it is often described as "knowledge mining from data" or Knowledge Discovery in Databases (KDD). In practice, data mining spans data collection, database creation, data management, data analysis and understanding.

The process of knowledge discovery in databases involves the following steps:

1. Data cleaning. (To remove noise and inconsistent data.)

2. Data integration. (Where multiple data sources may be combined.)

3. Data selection. (Where data relevant to the analysis task are retrieved from the database.)

4. Data transformation. (Where data are transformed or consolidated into forms appropriate for mining by performing summary or aggregation operations, for instance)

5. Data mining. (An essential process where intelligent methods are applied in order to extract data patterns.)

6. Pattern evaluation. (To identify the truly interesting patterns representing knowledge, based on interestingness measures.)

7. Knowledge presentation.(Where visualization and knowledge representation techniques are used to present the mined knowledge to the user.)

Data Warehouse:

A data warehouse is a repository of information collected from multiple sources, stored under a unified schema, and usually residing at a single site.

Text:

Most banks and financial institutions offer a wide variety of banking services such as checking, savings, business and individual customer transactions, and credit and investment services like mutual funds. Some also offer insurance services and stock investment services.

There are different types of analysis available, but in this case we focus on one known as "evolution analysis".

Data evolution analysis is used for objects whose behavior changes over time. Although this may include characterization, discrimination, association, classification, or clustering of time-related data, in practice evolution analysis is done through time-series data analysis, sequence or periodicity pattern matching, and similarity-based data analysis.

Data collected from the banking and financial sectors are often relatively complete, reliable and of high quality, which facilitates analysis and data mining. Here we discuss a few cases:

Example 1. Suppose we have stock market data for the last few years available, and we would like to invest in shares of the best companies. A data mining study of stock exchange data may identify stock evolution regularities for overall stocks and for the stocks of particular companies. Such regularities may help predict future trends in stock market prices, contributing to our decision making regarding stock investments.
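
To make the idea of a stock evolution regularity slightly more concrete, here is a minimal PHP sketch (an illustration only, with made-up prices, not part of any real trading system) that computes a simple moving average over recent closing prices and checks whether the latest price sits above or below that trend:

<?php
// Illustrative closing prices for one stock (made-up numbers).
$closingPrices = array(101.2, 102.5, 101.9, 103.4, 104.1, 103.8, 105.0, 106.2);

// Simple moving average over the last $window prices.
function movingAverage(array $prices, $window)
{
    $recent = array_slice($prices, -$window);
    return array_sum($recent) / count($recent);
}

$sma = movingAverage($closingPrices, 5);
$latest = end($closingPrices);

echo "5-day moving average: " . round($sma, 2) . "\n";
echo ($latest > $sma) ? "Latest price is above the trend\n" : "Latest price is below the trend\n";
?>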

Example 2. One may like to view debt and revenue changes by month, by region and by other factors, along with minimum, maximum, total, average and other statistical information. Data warehouses facilitate such comparative analysis and outlier analysis, both of which play important roles in financial data analysis and mining.

Example 3. Loan payment prediction and customer credit analysis are critical to the business of a bank. Many factors can strongly influence loan payment performance and customer credit rating. Data mining can help identify the important factors and eliminate the irrelevant ones.

Factors related to the risk of loan payments include the term of the loan, debt ratio, payment-to-income ratio, credit history and many more. The bank can then decide, according to this critical factor analysis, which applicant profiles show relatively low risk.
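
As a hedged illustration of such critical factor analysis, the sketch below scores a hypothetical loan application from a few of the factors just mentioned; the weights and cut-off are purely illustrative and are not a real credit model:

<?php
// Purely illustrative scoring of a loan application (not a real credit model).
$application = array(
    'debt_ratio'              => 0.35,  // total debt / total assets
    'payment_to_income_ratio' => 0.25,  // monthly payment / monthly income
    'term_years'              => 15,
    'missed_payments_history' => 1,
);

// Illustrative weights for each risk factor.
$risk = 0.0;
$risk += $application['debt_ratio'] * 40;
$risk += $application['payment_to_income_ratio'] * 40;
$risk += ($application['term_years'] / 30) * 10;
$risk += $application['missed_payments_history'] * 10;

// Illustrative cut-off: lower scores indicate lower-risk profiles.
echo "Risk score: " . round($risk, 1) . "\n";
echo ($risk < 40) ? "Relatively low risk\n" : "Needs further review\n";
?>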

We can perform the task faster and create a more sophisticated presentation with financial analysis software. These products condense complex data analyses into easy-to-understand graphic presentations. And there's a bonus: such software can vault our practice to a more advanced business consulting level and help us attract new clients.

To help us find a program that best fits our needs - and our budget - we examined some of the leading packages that represent, by vendors' estimates, more than 90% of the market. Although all the packages are marketed as financial analysis software, they don't all perform every function needed for full-spectrum analyses; each, however, should allow us to provide a unique service to clients.

The Products:

ACCPAC CFO (Comprehensive Financial Optimizer) is designed for small and medium-size enterprises and can help make business-planning decisions by modeling the impact of various options. This is accomplished by demonstrating the what-if outcomes of small changes. A roll forward feature prepares budgets or forecast reports in minutes. The program also generates a financial scorecard of key financial information and indicators.

Customized Financial Analysis by BizBench provides financial benchmarking to determine how a company compares to others in its industry by using the Risk Management Association (RMA) database. It also highlights key ratios that need improvement and year-to-year trend analysis. A unique function, Back Calculation, calculates the profit targets or the appropriate asset base to support existing sales and profitability. Its DuPont Model Analysis demonstrates how each ratio affects return on equity.

Financial Analysis CS reviews and compares a client's financial position with business peers or industry standards. It also can compare multiple locations of a single business to determine which are most profitable. Users who subscribe to the RMA option can integrate with Financial Analysis CS, which then lets them provide aggregated financial indicators of peers or industry standards, showing clients how their businesses compare.

iLumen regularly collects a client's financial information to provide ongoing analysis. It also provides benchmarking information, comparing the client's financial performance with industry peers. The system is Web-based and can monitor a client's performance on a monthly, quarterly and annual basis. The network can upload a trial balance file directly from any accounting software program and provide charts, graphs and ratios that demonstrate a company's performance for the period. Analysis tools are viewed through customized dashboards.

PlanGuru by New Horizon Technologies can generate client-ready integrated balance sheets, income statements and cash-flow statements. The program includes tools for analyzing data, making projections, forecasting and budgeting. It also supports multiple resulting scenarios. The system can calculate up to 21 financial ratios as well as the breakeven point. PlanGuru uses a spreadsheet-style interface and wizards that guide users through data entry. It can import from Excel, QuickBooks, Peachtree and plain text files. It comes in professional and consultant editions. An add-on, called the Business Analyzer, calculates benchmarks.

ProfitCents by Sageworks is Web-based, so it requires no software or updates. It integrates with QuickBooks, CCH, Caseware, Creative Solutions and Best Software applications. It also provides a wide variety of business analyses for nonprofits and sole proprietorships. The company offers free consulting, training and customer support. It's also available in Spanish.

ProfitSystem fx Profit Driver by CCH Tax and Accounting provides a wide range of financial diagnostics and analytics. It provides data in spreadsheet form and can calculate benchmarking against industry standards. The program can track up to 40 periods.

Source: http://ezinearticles.com/?Data-Mining-and-Financial-Data-Analysis&id=2752017

Friday, 16 December 2016

Importance of Data Mining Services in Business

Data mining is the use of algorithms to uncover hidden information within data. It helps to extract useful information from raw data, which can then be used to make practical interpretations for decision making.
It can be technically defined as the automated extraction of hidden information from large databases for predictive analysis. In other words, it is the retrieval of useful information from large masses of data, which is also presented in an analyzed form for specific decision-making. Although data mining is a relatively new term, the technology is not. It is also known as knowledge discovery in databases, since it involves searching for implicit information in large databases.
It is primarily used today by companies with a strong customer focus - retail, financial, communication and marketing organizations. It has gained importance because of its wide applicability. It is being used increasingly in business applications for understanding and then predicting valuable data, like consumer buying actions and buying tendencies, customer profiles, industry analysis, etc. It is used in several applications like market research, consumer behavior, direct marketing, bioinformatics, genetics, text analysis, e-commerce, customer relationship management and financial services.

However, the use of some advanced technologies makes it a decision making tool as well. It is used in market research, industry research and for competitor analysis. It has applications in major industries like direct marketing, e-commerce, customer relationship management, scientific tests, genetics, financial services and utilities.

Data mining consists of the following major elements:

    Extract and load operational data onto the data store system.
    Store and manage the data in a multidimensional database system.
    Provide data access to business analysts and information technology professionals.
    Analyze the data by application software.
    Present the data in a useful format, such as a graph or table.

The use of data mining in business makes data more relevant to its application. There are several kinds of data mining - text mining, web mining, relational database mining, graphic data mining, audio mining and video mining - all of which are used in business intelligence applications. Data mining software is used to analyze consumer data and trends in banking as well as many other industries.

Outsourcing Web Research offers complete data mining services and solutions to quickly collect data and information from multiple Internet sources for your business needs in a cost-efficient manner.

Source: http://ezinearticles.com/?Importance-of-Data-Mining-Services-in-Business&id=2601221

Tuesday, 13 December 2016

Data Extraction Services For Better Outputs in Your Business

Data extraction can be defined as the process of retrieving data from an unstructured source in order to process it further or store it. It is very useful for large organizations that deal with large amounts of data on a daily basis, data that needs to be processed into meaningful information and stored for later use. Data extraction is a systematic way to extract and structure data from scattered and semi-structured electronic documents, as found on the web and in various data warehouses.

In today's highly competitive business world, vital business information such as customer statistics, competitors' operational figures and inter-company sales figures plays an important role in making strategic decisions. By signing on with a service provider, you will get access to critical data from various sources like websites, databases, images and documents.

It can help you take strategic business decisions that shape your business goals. Whether you need customer information, insights into your competitor's operations or a picture of your organization's performance, it is critical to have data at your fingertips as and when you want it. Your company may be swamped with tons of data, and it may prove a headache to control and convert that data into useful information. Data extraction services enable you to get data quickly and in the right format.

Few areas where Data Extraction can help you are:

    Capturing financial data
    Generating better sales leads
    Conducting market research, survey and analysis
    Conducting product research and analysis
    Tracking, extracting and harvesting product pricing data
    Searching for specific job postings
    Duplicating an online database
    Acquiring real estate data
    Processing auction information
    Searching online newspapers for latest pricing information
    Extracting and summarizing news stories from online news sources

Outsourcing companies provide custom-made data extraction services tailored to the client's requirements. The main types of data extraction services are:

    Web extraction
    Database extraction

Outsourcing is a beneficial option for large organizations seeking to manage large volumes of information. Outsourcing these services helps businesses manage their data effectively, which in turn enables them to experience an increase in profits. By outsourcing, you can certainly increase your competitive edge and save costs too!

This article is courtesy of Web Scraping Expert - an executive at Outsourcing Web Research offering a high-quality, time-bound and comprehensive range of data extraction services at affordable rates. For more info please visit us at: http://www.webscrapingexpert.com/ or directly send your requirements at: info@webscrapingexpert.com

Source: http://ezinearticles.com/?Data-Extraction-Services-For-Better-Outputs-in-Your-Business&id=2760257

Wednesday, 7 December 2016

Web Data Extraction

The Internet as we know it today is a repository of information that can be accessed across geographical boundaries. In just over two decades, the Web has moved from a university curiosity to a fundamental research, marketing and communications vehicle that impinges upon the everyday life of most people all over the world. It is accessed by over 16% of the world's population, spanning over 233 countries.

As the amount of information on the Web grows, that information becomes ever harder to keep track of and use. Compounding the matter, this information is spread over billions of Web pages, each with its own independent structure and format. So how do you find the information you're looking for in a useful format - and do it quickly and easily without breaking the bank?

Search Isn't Enough

Search engines are a big help, but they can do only part of the work, and they are hard-pressed to keep up with daily changes. For all the power of Google and its kin, all that search engines can do is locate information and point to it. They go only two or three levels deep into a Web site to find information and then return URLs. Search engines cannot retrieve information from the deep web (information that is available only after filling in some sort of registration form and logging in), nor store it in a desirable format. In order to save the information in a desirable format or a particular application, after using the search engine to locate data, you still have to do the following tasks to capture the information you need:

· Scan the content until you find the information.

· Mark the information (usually by highlighting with a mouse).

· Switch to another application (such as a spreadsheet, database or word processor).

· Paste the information into that application.

It's not all copy and paste

Consider the scenario of a company looking to build up an email marketing list of over 100,000 names and email addresses from a public group. It will take roughly 28 man-hours even if the person manages to copy and paste each name and email in 1 second, translating to over $500 in wages alone, not to mention the other costs associated with it. The time involved in copying a record is directly proportional to the number of data fields that have to be copied and pasted.
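
The arithmetic behind that estimate is straightforward; the sketch below reproduces it, assuming one second per record and an illustrative wage of $18 per hour (the wage figure is an assumption, not taken from the example above):

<?php
// Assumptions: 100,000 records, 1 second to copy/paste each, $18/hour wage (illustrative).
$records          = 100000;
$secondsPerRecord = 1;
$hourlyWage       = 18.0;

$hours = ($records * $secondsPerRecord) / 3600;   // roughly 27.8 man-hours
$cost  = $hours * $hourlyWage;                    // roughly $500 in wages alone

printf("Manual copy-paste effort: %.1f hours, roughly $%.0f in wages\n", $hours, $cost);
?>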

Is there any Alternative to copy-paste?

A better solution, especially for companies that are aiming to exploit a broad swath of data about markets or competitors available on the Internet, lies in the use of custom Web harvesting software and tools.

Web harvesting software automatically extracts information from the Web and picks up where search engines leave off, doing the work the search engine can't. Extraction tools automate the reading, copying and pasting necessary to collect information for further use. The software mimics human interaction with the website and gathers data as if the website were being browsed. Web harvesting software navigates the website to locate, filter and copy the required data at much higher speeds than is humanly possible. Advanced software is even able to browse the website and gather data silently without leaving footprints of access.

Source: http://ezinearticles.com/?Web-Data-Extraction&id=575212

Saturday, 3 December 2016

Data Discovery vs. Data Extraction

Looking at screen-scraping at a simplified level, there are two primary stages involved: data discovery and data extraction. Data discovery deals with navigating a web site to arrive at the pages containing the data you want, and data extraction deals with actually pulling that data off of those pages. Generally when people think of screen-scraping they focus on the data extraction portion of the process, but my experience has been that data discovery is often the more difficult of the two.

The data discovery step in screen-scraping might be as simple as requesting a single URL. For example, you might just need to go to the home page of a site and extract out the latest news headlines. On the other side of the spectrum, data discovery may involve logging in to a web site, traversing a series of pages in order to get needed cookies, submitting a POST request on a search form, traversing through search results pages, and finally following all of the "details" links within the search results pages to get to the data you're actually after. In cases of the former a simple Perl script would often work just fine. For anything much more complex than that, though, a commercial screen-scraping tool can be an incredible time-saver. Especially for sites that require logging in, writing code to handle screen-scraping can be a nightmare when it comes to dealing with cookies and such.

In the data extraction phase you've already arrived at the page containing the data you're interested in, and you now need to pull it out of the HTML. Traditionally this has typically involved creating a series of regular expressions that match the pieces of the page you want (e.g., URL's and link titles). Regular expressions can be a bit complex to deal with, so most screen-scraping applications will hide these details from you, even though they may use regular expressions behind the scenes.
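
As a rough illustration of the regular-expression approach described above, here is a minimal PHP sketch that pulls URLs and link titles out of a fetched page; the pattern is deliberately naive and would need hardening for messy real-world HTML:

<?php
// Fetch a page (the URL is just an example) and pull out links with a regular expression.
$html = file_get_contents('http://www.example.com/');

// Naive pattern: capture the href value and the link text of each anchor tag.
preg_match_all('/<a\s[^>]*href="([^"]+)"[^>]*>(.*?)<\/a>/is', $html, $matches, PREG_SET_ORDER);

foreach ($matches as $match) {
    $url   = $match[1];
    $title = strip_tags($match[2]);
    echo $url . ' => ' . $title . "\n";
}
?>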

As an addendum, I should probably mention a third phase that is often ignored, and that is, what do you do with the data once you've extracted it? Common examples include writing the data to a CSV or XML file, or saving it to a database. In the case of a live web site you might even scrape the information and display it in the user's web browser in real-time. When shopping around for a screen-scraping tool you should make sure that it gives you the flexibility you need to work with the data once it's been extracted.

Source: http://ezinearticles.com/?Data-Discovery-vs.-Data-Extraction&id=165396

Monday, 21 November 2016

How XPath Plays a Vital Role in Web Scraping

XPath is a language for finding information in structured documents like XML or HTML. You can say that XPath is (sort of) SQL for XML or HTML files. XPath is used to navigate through elements and attributes in an XML or HTML document.

To understand XPath we must be clear about elements and nodes which are the building blocks of XML and HTML. Let’s talk about them. Here is an example element in an HTML document:

   <a class="hyperlink" href="http://www.google.com">google</a>

Copy the above text to a file, name it sample.html and open it in a browser. This will end up as a text link displaying the word "google", and it will take you to www.google.com. Each element has three main parts: the type, the attributes, and the text. They are listed below:

a               Type
class, href     Attributes
google          Text

Let's grab some XPath developer tools. I am using Firebug for Firefox, but you can also use Chrome's developer tools. We will now form some XPath expressions to extract data from the above element. We will also verify the XPath using the Firebug console.

For extracting the text “google”:

   //a[@href]/text()

   //a[@class="hyperlink"]/text()

For extracting the hyperlink, i.e. "www.google.com":

   //a/@href
   //a[@class="hyperlink"]/@href
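
Here is a quick, hedged sketch of how these expressions can be evaluated in code, using PHP's built-in DOMDocument and DOMXPath classes (just one possible way to run them):

<?php
// The sample element from above, written with straight quotes so PHP can parse it.
$sample = '<a class="hyperlink" href="http://www.google.com">google</a>';

$dom = new DOMDocument();
@$dom->loadHTML($sample);          // @ silences warnings about the fragment markup
$xpath = new DOMXPath($dom);

// Extract the text "google".
foreach ($xpath->query('//a[@class="hyperlink"]/text()') as $textNode) {
    echo $textNode->nodeValue . "\n";
}

// Extract the hyperlink.
foreach ($xpath->query('//a/@href') as $hrefAttr) {
    echo $hrefAttr->value . "\n";
}
?>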

That's all for a single element, but in reality you need to deal with more complex forms.

Let's proceed to the idea of nodes and the familial relationships between HTML elements. Look at this example code:

 <div title="Section1">

   <table id="Search">

       <tr class="Yahoo">Yahoo Search</tr>

       <tr class="Google">Google Search</tr>

   </table>

</div>

Notice the </div> at the bottom? That means the table and tr elements are contained within the div. These other elements are considered descendants of the div. The table is a child, and the tr is a grandchild (and so on and so forth). The two tr elements are considered siblings of each other. This is vital, as XPath uses these relationships to find your element.

So suppose you want to find the Google item. Any of the following expressions will work:

   //tr[@class='Google']
   //div/table/tr[2]
   //div[@title="Section1"]//tr

So let's analyze the expressions. We start at the top element (also known as a node). The // means search all descendants, while / means look only at the current element's children. So //div means look through all descendants for a div element. The brackets [] specify something about that element: we can look for an attribute with the @ symbol, or look for text with the text() function. We can chain as many of these together as we need.

Here is a quick reference:

   //           Search all descendant elements
   /            Search all child elements
   []           The predicate (specifies something about the element you are looking for)
   @            Specifies an element attribute (for example, @title)
   .            Specifies the current node (useful when you want to look for an element's children in the predicate)
   ..           Specifies the parent node
   text()       Gets the text of the element

In the context of web scraping, XPath is a nice tool to have in your belt, as it allows you to write specifications of document locations more flexibly than CSS selectors.

Source: http://blog.datahut.co/how-xpath-plays-vital-role-in-web-scraping/

Thursday, 3 November 2016

Why Outsourcing Data Mining Services?

Are huge volumes of raw data waiting to be converted into information that you can use? Your organization's hunt for valuable information ends with data mining, which can help bring more accuracy and clarity to the decision-making process.

Today's world is information hungry, and with the Internet offering flexible communication, there is a remarkable flow of data. It is important to make the data available in a readily workable format where it can be of great help to your business. Filtered data is of considerable use to the organization; used efficiently, these services increase profits, smooth the workflow and reduce overall risks.

Data mining is a process that involves sorting through vast amounts of data and seeking out the pertinent information. In most instances data mining is conducted by professionals, business organizations and financial analysts, although there are many growing fields that are discovering its benefits for their business.

Data mining helps make every decision quick and feasible. The information obtained from it is used in several applications for decision-making relating to direct marketing, e-commerce, customer relationship management, healthcare, scientific tests, telecommunications, financial services and utilities.

Data mining services include:

  •     Congregating data from websites into an Excel database
  •     Searching and collecting contact information from websites
  •     Using software to extract data from websites
  •     Extracting and summarizing stories from news sources
  •     Gathering information about competitors' businesses

In this era of globalization, handling your important data is becoming a headache for many business verticals, so outsourcing is a profitable option for your business. Since all projects are customized to suit the exact needs of the customer, huge savings in terms of time, money and infrastructure can be realized.

Advantages of Outsourcing Data Mining Services:

  •     Skilled and qualified technical staff who are proficient in English
  •     Improved technology scalability
  •     Advanced infrastructure resources
  •     Quick turnaround time
  •     Cost-effective prices
  •     Secure Network systems to ensure data safety
  •     Increased market coverage

Outsourcing will help you to focus on your core business operations and thus improve overall productivity. Data mining outsourcing has therefore become a wise choice for business. Outsourcing these services helps businesses manage their data effectively, which in turn enables them to achieve higher profits.

Source: http://ezinearticles.com/?Why-Outsourcing-Data-Mining-Services?&id=3066061

Tuesday, 18 October 2016

Scraping Yelp Data and How to Use It

We get a lot of requests to scrape data from Yelp. These requests come in on a daily basis, sometimes several times a day. At the same time, we have not seen a good business case for a commercial Yelp scraping project.

We have decided to release a simple example Yelp robot which anyone can run in Chrome on their own computer, tune to their own requirements and use to collect some data. With this robot you can save business contact information like addresses, postal codes, telephone numbers, website addresses, etc. The robot is placed in our Demo space on the Web Robots portal for anyone to use: just sign up, find the robot and use it.

How to use it:

    Sign in to our portal here.
    Download our scraping extension from here.
    Find robot named Yelp_us_demo in the dropdown.
    Modify start URL to the first page of your search results. For example: http://www.yelp.com/search?find_desc=Restaurants&find_loc=Arlington,+VA,+USA
    Click Run.
    Let the robot finish its job and download the data from the portal.

Some things to consider:

This robot is placed in our Demo space, so it is accessible to anyone. Anyone will be able to modify and run it, and anyone will be able to download the collected data. The robot's code may be edited by someone else, but you can always restore it from the sample code below. Yelp limits the number of search results, so do not expect to scrape more results than you would normally see by searching.

In case you want to create your own version of such a robot, here is its full code:

// starting URL above must be the first page of search results.
// Example: http://www.yelp.com/search?find_desc=Restaurants&find_loc=Arlington,+VA,+USA

steps.start = function () {
   // Collect one row of contact details per business listing on the current results page.
   var rows = [];

   $(".biz-listing-large").each(function (i, v) {
     if ($("h3 a", v).length > 0) {
       var row = {};
       row.company = $(".biz-name", v).text().trim();
       row.reviews = $(".review-count", v).text().trim();
       row.companyLink = $(".biz-name", v)[0].href;
       row.location = $(".secondary-attributes address", v).text().trim();
       row.phone = $(".biz-phone", v).text().trim();
       rows.push(row);
     }
   });

   // Save the rows collected from this page.
   emit("yelp", rows);

   // Queue the next search results page, if there is one, then mark this step as done.
   if ($(".next").length === 1) {
     next($(".next")[0].href, "start");
   }
   done();
};

Source: https://webrobots.io/scraping-yelp-data/

Wednesday, 28 September 2016

Easy Web Scraping using PHP Simple HTML DOM Parser Library

Web scraping is the only way to get data from a website when the website doesn't provide an API to access its data. Web scraping involves the following steps to get the data:

    Make a request to the web page.
    Parse/extract the data that you want to scrape from the website.
    Store the data for final output (Excel, CSV, MySQL database, etc.).

Web scraping can be implemented in any language like PHP, Java, .NET or Python - any language that allows you to make a web request and get the web page content (HTML text) into a variable. In this article I will show you how to use the Simple HTML DOM PHP library to do web scraping using PHP.
PHP Simple HTML DOM Parser

Simple HTML DOM is a PHP library for parsing data from web pages; in short, you can use this library to do web scraping with PHP and even store the data in a MySQL database. Simple HTML DOM has the following features:

    The parser library is written in PHP 5+
    It requires PHP 5+ to run
    The parser supports invalid HTML
    It allows you to select HTML tags in a jQuery-like way
    It supports XPath and CSS-path-based web extraction
    It provides both object-oriented and procedural ways to write code

Scrape All Links

<?php
include "simple_html_dom.php";

//create object
$html=new simple_html_dom();

//load specific URL
$html->load_file("http://www.google.com");

// This will Find all links
foreach($html->find('a') as $element)
   echo $element->href . '<br>';

?>

Scrape images

<?php
include "simple_html_dom.php";

//create object
$html=new simple_html_dom();

//load specific url
$html->load_file("http://www.google.com");

// This will find all images
foreach($html->find('img') as $element)
   echo $element->src . '<br>';

?>

This gives just a little idea of how you can do web scraping using PHP. Keep in mind that XPath can make your job simpler and faster. You can find all available methods on the Simple HTML DOM documentation page.
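
To round off the earlier steps (request, parse, store), here is a hedged sketch of the storage step: it reuses the Simple HTML DOM calls from the snippets above and writes the scraped links to a CSV file with PHP's built-in fputcsv(); the output file name is just an example:

<?php
include "simple_html_dom.php";

// Parse the page as in the examples above.
$html = new simple_html_dom();
$html->load_file("http://www.google.com");

// Open a CSV file for the final output (the file name is illustrative).
$fp = fopen("scraped_links.csv", "w");
fputcsv($fp, array("url", "anchor_text"));   // header row

// Write one row per link found on the page.
foreach ($html->find('a') as $element) {
    fputcsv($fp, array($element->href, $element->plaintext));
}

fclose($fp);
?>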

Source: http://webdata-scraping.com/web-scraping-using-php-simple-html-dom-parser-library/

Monday, 19 September 2016

Powerful Web Scraping Software – Content Grabber Review

There are many web scraping software packages and cloud-based web scraping services available in the market for extracting data from websites. They vary widely in cost and features. In this article, I am going to introduce one such advanced web scraping tool, "Content Grabber", which is widely used and is among the best web scraping software on the market.

Content Grabber is used for web extraction, web scraping and web automation. It can extract content from complex websites and export it as structured data in a variety of formats like Excel Spreadsheets, XML, CSV and databases. Content Grabber can also extract data from highly dynamic websites. It can extract from AJAX-enabled websites, submit forms repeatedly to cover all possible input values, and manage website logins.

Content Grabber is designed to be reliable, scalable and customizable. It is specifically designed for users with a critical reliance on web scraping and web data extraction. It also enables you to make standalone web scraping agents which you can market and sell as your own royalty free web scraping software.

Applications of Content Grabber:

The following are the few applications of Content Grabber:

  •     Data aggregation – for example news aggregation.
  •     Competitive pricing and monitoring e.g. monitor dealers for price compliance.
  •     Financial and Market Research e.g. Make proactive buying and selling decisions by continuously receiving corporate operational data.
  •     Content Integration i.e. integration of data from various sources at one place.
  •     Business Directory Scraping – for example: yellow pages scraping, yelp scraping, superpages scraping etc.
  •     Extracting company data from yellow pages for scraping common data fields like Business Name, Address, Telephone, Fax, Email, Website and Category of Business.
  •     Extracting eBay auction data like: eBay Product Name, Store Information, Buy it Now prices, Product Price, List Price, Seller Price and many more.
  •     Extracting Amazon product data: Information such as Product title, cost, description, details, availability, shipping info, ASIN, rating, rank, etc can be extracted.

Content Grabber Features:

The following section highlights some of the key features of Content Grabber:

1. Point and Click Interface

The Content Grabber editor has an easy-to-use point-and-click interface for configuration: one simply needs to click on web elements to configure website navigation and content capture.

2. Easy to Use

The Content Grabber point-and-click interface is so simple that it can easily be used by beginners and non-programmers. There are built-in facilities that automatically detect and configure all commands. It will automatically create lists of links and lists of content, manage pagination, handle web pages, download or upload files and capture any action you perform on a web page. You can also manually configure the agent commands, so Content Grabber gives you both simplicity and control.

3. Reliable and Scalable

Content Grabber's powerful features, like testing and debugging, solid error handling and error recovery, allow agents to run in the most difficult scenarios. It easily handles and scrapes dynamic websites built with JavaScript and AJAX. Content Grabber's intelligent agents don't break with most site structure changes. These features enable us to build reliable web scraping agents. There are various configuration and performance tuning options that make Content Grabber scalable. You can build as many web scraping agents as you want with Content Grabber.

4. High Performance

Multi-threading is used to increase the performance in Content Grabber. Content Grabber uses optimized web browsers. It uses static browsers for static web pages and dynamic browsers for dynamic web pages. It has an ultra-fast HTML5 parser for ultra-fast web scraping. One can use many web browsers concurrently to boost performance.

5. Debugging, Logging and Error Handling

Content Grabber has robust support for debugging, error handling and logging. Using a debugger, you can test and debug the web scraping agents which helps you to build reliable and error free web scraping solutions because most of the issues are addressed at design time. Content Grabber allows agent logging with three detail levels: Log URLs, Log raw HTML, Log to database or file. Logs can be useful to identify problems that occurred during execution of a web scraping agent. Content Grabber supports automatic error handling and custom error handling through scripting. Error status reports can also be mailed to administrators.

6. Scripting

Content Grabber comes with a built in script editor with IntelliSense that one can use in case of some unusual requirements or to fine tune some process. Scripting can be used to control agent behaviour, content transformation, customize data export and delivery and to generate data inputs for agent.

7. Unlimited Web Scraping Agents

Content Grabber allows building an unlimited number of Self-Contained Web Scraping Agents. Self-Contained agents are a standalone executable that can be run independently, branded as your own and distributed royalty free. Content Grabber provides an easy to use and effective GUI to manage all the agents. One can view status and logs of all the agents or run and schedule the agents in one centralized location.

8. Automation

Require data on a schedule? Weekly? Every day? Each hour? Content Grabber allows you to automate and publish the extracted data. Configure Content Grabber by telling it once what data you want, and then schedule it to run automatically.

And much more

There are too many features that Content Grabber provides, but here are a few more that may be useful and interest you.

  •     Schedule agents
  •     Manage proxies
  •     Custom notification criteria and messages
  •     Email notifications
  •     Handle websites logins
  •     Capture Screenshots of web elements or entire web page or save as PDF.
  •     Capture hidden content on web page.
  •     Crawl entire website
  •     Input data from almost any data source.
  •     Auto scroll to load dynamic data
  •     Handle complex JavaScript and AJAX actions
  •     XPATH support
  •     Convert Images to Text
  •     CAPTCHA handling
  •     Extract data from non-HTML documents like PDF and Word Documents
  •     Multi-threading and multiple web browsers
  •     Run agent from command line.

The above features come with the Professional edition license. Content Grabber’s Premium edition license is available with the following extra features:

1. Visual Studio 2013 integration

One can integrate Content Grabber with Visual Studio and take advantage of more powerful script editing, debugging and unit testing.

2. Remove Content Grabber branding

One can remove Content Grabber branding from the Content Grabber agents and distribute the executable.

3. Custom Design Templates

One can customize the Content Grabber agent user interface design with custom HTML templates – e.g. add your own company branding.

4. Royalty free distribution

One can distribute the Content Grabber agent to anybody without paying royalty fees and can run agents from the command line anywhere.

5. Programming Interface

Programming interfaces like Desktop API, Web API and windows service for building and editing agents.

6. Custom Web Scraping Application Development:

Content Grabber provides an API and Visual Studio integration which developers can use to build custom web scraping applications. It provides full control of the user interface and export functionality. One can develop both desktop and web-based custom web scraping applications using the Content Grabber programming interface. It is a great tool and provides an opportunity for developers to build general web scraping applications and sell them to generate revenue.

Are you looking for web scraping services? Do you need any assistance related to Content Grabber? We can probably help you to achieve your scraping-based project goals. We would be more than happy to hear from you.

Source: http://webdata-scraping.com/powerful-web-scraping-software-content-grabber/

Wednesday, 7 September 2016

How to Use Microsoft Excel as a Web Scraping Tool

Microsoft Excel is undoubtedly one of the most powerful tools to manage information in a structured form. The immense popularity of Excel is not without reasons. It is like the Swiss army knife of data with its great features and capabilities. Here is how Excel can be used as a basic web scraping tool to extract web data directly into a worksheet. We will be using Excel web queries to make this happen.

Web queries are a feature of Excel used to fetch data from a web page into an Excel worksheet easily. They can automatically find tables on the web page and let you pick the particular table you need data from. Web queries can also be handy in situations where an ODBC connection is impossible to maintain, beyond just extracting data from web pages. Let's see how web queries work and how you can scrape HTML tables off the web using them.
Getting started

We'll start with a simple Web query to scrape data from the Yahoo! Finance page. This page is particularly easy to scrape and hence is a good fit for learning the method. The page is also pretty straightforward and doesn't have important information in the form of links or images. Here is the URL we will be using for the tutorial:

http://finance.yahoo.com/q/hp?s=GOOG

To create a new Web query:

1. Select the cell in which you want the data to appear.
2. Click on Data-> From Web
3. The New Web query box will pop up as shown below.

4. Enter the web page URL you need to extract data from in the Address bar and hit the Go button.
5. Click on the yellow-black buttons next to the table you need to extract data from.

6. After selecting the required tables, click on the Import button and you’re done. Excel will now start downloading the content of the selected tables into your worksheet.

Once you have the data scraped into your Excel worksheet, you can do a host of things like creating charts, sorting, formatting etc. to better understand or present the data in a simpler way.
Customizing the query

Once you have created a web query, you have the option to customize it according to your requirements. To do this, access the Web query properties by right-clicking on a cell with the extracted data. The page you were querying appears again; click on the Options button to the right of the address bar. A new pop-up box will be displayed where you can customize how the web query interacts with the target page. The options here let you change some of the basic things related to web pages, like formatting and redirections.

Apart from this, you can also alter the data range options by right clicking on a random cell with the query results and selecting Data range properties. The data range properties dialog box will pop up where you can make the required changes. You might want to rename the data range to something you can easily recognize like ‘Stock Prices’.

Auto refresh

Auto-refresh is a feature of web queries worth mentioning, and one that makes our Excel web scraper truly powerful. You can make the extracted data auto-refresh so that your Excel worksheet updates the data whenever the source website changes. You can set how often the data should be updated from the source web page in the data range options menu. The auto-refresh feature can be enabled by ticking the box beside 'Refresh every' and setting your preferred time interval for updating the data.
Web scraping at scale

Although extracting data using Excel can be a great way to scrape HTML tables from the web, it is nowhere close to a real web scraping solution. It can prove useful if you are collecting data for your college research paper or you are a hobbyist looking for a cheap way to get your hands on some data. If data for business is your need, you will definitely have to depend on a web scraping provider with expertise in dealing with web scraping at scale. Outsourcing the complicated process of web scraping will also give you more room to deal with other things that need extra attention, such as marketing your business.

Source: https://www.promptcloud.com/blog/how-to-use-excel-to-scrape-websites

Monday, 29 August 2016

How Web Scraping Can Help You Detect Weak Spots in Your Business

Business intelligence is not a new term. Businesses have always employed experts to analyse progress, market and industry trends to keep their growth graph going up. Now that we have big data and the tool to gather this data – web scraping – business intelligence has become even more fruitful. In fact, business intelligence has become necessary for survival now that the competition is fierce in every industry. This is the reason why most enterprises depend on web scraping solutions to gather the data relevant to their businesses. This data is highly insightful and dependable enough to make critical business decisions. Business intelligence from web scraping is definitely a game changer for companies, as it can supply relevant and actionable data with minimal effort.

Most businesses have weak spots that are being overlooked or hidden from plain sight. These weak spots, if left unnoticed, can gradually result in the downfall of your company. Here is how you can use data acquired through web scraping to detect weak spots in your business and strengthen them.

Competitor analysis

Many times, you can find the flaws in your business by keeping a close watch on your competitors. Competitor analysis is something we owe to web scraping, as the level of competitive intelligence you can derive from it was never achievable in the past. By crawling forums and social media sites where your target audience is, you can easily find out if your competitor is leveraging something you have overlooked. Competitor analysis is all about staying updated on each and every action by your competitors, so that you can always be prepared for their next strategic move. If your competitors are doing better than you, this data can be used to make a comparison between your business and theirs, which would give you insights into where you lack.

Brand monitoring on Social media

With social media acting as a platform where businesses and customers can interact with each other, the data available on these sites is becoming increasingly relevant to businesses. Any issues in your business operations will also reflect in your customer sentiments. Social media is a goldmine of sentiment data that can help you detect issues within your company. By analysing the posts that mention your brand or product on social media sites, you can identify which departments of your company are functioning well and which aren't.

For example, if you are an Ecommerce portal and many users are complaining on social media about delivery issues from your company, you might want to switch to a logistics partner who does a better job. The ability to identify such issues at the earliest stage is extremely important, and that's where web scraping becomes a life saver. With social media scraping, monitoring your brand on social media is easier than ever before, and the chance of minor issues escalating into bigger ones is almost non-existent. Brand monitoring is extremely crucial if you are a business operating in the online space. Social media scraping solutions are provided by many leading web scraping companies, which totally eliminates the technical complications associated with the process for you.

Finding untapped opportunities

There are always new and untapped markets and opportunities that are relevant to your business. Finding them is not going to be an easy task with manual and outdated methods of research. Web scraping can fill this gap and help you find opportunities that your company can use to extend its reach and progress. Sometimes, targeting the right audience makes all the difference you've been trying to make. By using web crawling to find mentions of your relevant keywords on the web, you can easily stay updated on your niche and move into any new untapped markets. Web crawling for keywords is explained in more detail in our previous blog.

Bottom line

It is not a cakewalk to stay ahead of the competition considering how competitive every industry has become in this digital age. It is crucial to find the weak spots and untapped opportunities of your business before someone else does. Of course, you can always use some help from technology when you need it. Web scraping is clearly the best way to find and gather data that will help you figure these out. With web crawling solutions that can completely take care of this niche process, nothing is stopping you from using the data and insights that the web has in stock for your business.

Source: https://www.promptcloud.com/blog/web-scraping-detect-weak-spots-business

Wednesday, 17 August 2016

ERP Data Conversions - Best Practices and Steps

Every company that has gone through an ERP project has gone through the painful process of getting the data ready for the new system. The process of executing this typically goes through the following steps:

(1) Extract or define

(2) Clean and transform

(3) Load

(4) Validate and verify

This process is typically executed multiple times (2 - 5+ times depending on complexity) throughout an ERP project to ensure that good data ends up in the new system. If the data is incorrect, not sufficiently cleaned or adjusted, or loaded incorrectly into the new system, it can cause serious problems when the new system is launched.

(1) Extract or define

This involves extracting the data from legacy systems which are to be decommissioned. In some cases the data may not exist in a legacy system, as the old process may be spreadsheet-based, and the data has to be created from scratch. Typically this involves creating extraction programs or leveraging existing reports to get the data into a format that can be put into a spreadsheet or a data management application.

(2) Data cleansing

Once extracted, the data is normally reviewed for accuracy by the business, supported by the IT team, and/or adjusted if it is incorrect or in a structure the new ERP system does not understand. Depending on the level of change and data quality, this can represent a significant effort involving many business stakeholders, and it may require multiple cycles.

(3) Load data to new system

As the data gets structured into a format the receiving ERP system can handle, the load programs may also be built to apply certain changes as part of the process of converting the data into the new system. Data is loaded into interface tables and then into the new system's core master data and transaction tables.

When loading the data into the new system, it is key to consider the inter-dependency of the different data elements and to validate the cross dependencies. Exceptions are dealt with, feed into lessons learned, and are used to modify the extracts, data cleansing or load process for the next cycle.
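
As a hedged sketch of what a simple load-and-reconcile step can look like in practice, the snippet below uses PHP's PDO to push cleansed CSV rows into a hypothetical interface (staging) table and then compares row counts; the table, column and file names are illustrative assumptions, not from the article:

<?php
// Hypothetical staging table and CSV layout; connection details are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=erp_staging', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$insert = $pdo->prepare(
    'INSERT INTO customer_interface (legacy_id, name, credit_limit) VALUES (?, ?, ?)'
);

// Load the cleansed extract row by row.
$rowsRead = 0;
$fp = fopen('cleansed_customers.csv', 'r');
fgetcsv($fp);                      // skip the header row
while (($row = fgetcsv($fp)) !== false) {
    $insert->execute(array($row[0], $row[1], $row[2]));
    $rowsRead++;
}
fclose($fp);

// Simple reconciliation: compare rows read from the file with rows now in the interface table.
$loaded = $pdo->query('SELECT COUNT(*) FROM customer_interface')->fetchColumn();
echo "Read $rowsRead rows, loaded $loaded rows\n";
?>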

(4) Validate and verify

The final phase of the data conversion process is to verify the converted data through extracts, reports or manually to ensure that all the data went in correctly. This may also include both internal and external audit groups and all the key data owners. Part of the testing will also include attempting to transact using the converted data successfully.

I would prioritize the topmost success factors, or best practices, for executing a successful conversion as follows:

(1) Start the data conversion early enough by assessing the quality of the data. Starting too late can result in either costly project delays or decisions to load garbage and "deal with it later" resulting in an increase in problems as the new system is launched.

(2) Identify and assign data owners and customers (often forgotten) for the different elements. Ensure not only that the data owners sign off on the data conversions but also that the key users of the data are involved in reviewing the selection criteria, the data cleansing process and the load verification.

(3) Run sufficient rounds of testing of the data, including not only validating the loads but also transacting with the converted data.

(4) Depending on the complexity, evaluate possible tools beyond spreadsheets and custom programming to help with the cleansing, transformation and load steps of the data conversion process.

(5) Don't under-estimate the effort in cleansing and validating the converted data.

(6) Define processes and consider other tools to help maintain the accuracy of the data after the system goes live.

Source: http://ezinearticles.com/?ERP-Data-Conversions---Best-Practices-and-Steps&id=7263314

Monday, 8 August 2016

How to Scrape a Website into Excel without programming

This web scraping tutorial will teach you, visually and step by step, how to scrape or extract data from websites into Excel using import.io (a free tool), without programming skills.

Personally, I use web scraping for analysing my competitors' best-performing blog posts or content, such as which blog posts received the most comments or social media shares.

In this tutorial, we will scrape the following data from a blog:

    All blog posts URLs.
    Author names for each post.
    Blog posts titles.
    The number of social media shares each post received.

Then we will use the extracted data to determine which blog posts and authors are popular, and which posts received the most engagement from users through social media shares and on-page comments.

Let’s get started.

Step 1: Install the import.io app

The first step is to install the import.io app, a free web scraping tool and one of the best web scraping applications available. It is available for Windows, Mac and Linux. Import.io offers advanced data extraction features without coding by allowing you to create custom APIs or crawl entire websites.

After installation, you will need to sign up for an account. It is completely free, so don't worry. I will not cover the installation process. Once everything is set up correctly, you will see something similar to the window below after your first login.

Step 2: Choose how to scrape data using the import.io extractor

With import.io you can do data extraction by creating custom APIs or crawling entire websites. It comes equipped with different tools for data extraction, such as magic, extractor, crawler and connector.

In this tutorial, I will use the tool called "extractor" to create a custom API for our data extraction process.

To get started, click the "new" red button at the top right of the page and then click the "Start Extractor" button in the pop-up window.

After clicking "Start Extractor" the import.io app's internal browser window will open as shown below.

Step 3: Data scraping process

Now that the import.io browser is open, navigate to the blog URL you want to scrape data from. Once you have navigated to the target blog URL, turn on extraction. In this tutorial, I will use the blog URL bongo5.com for data extraction.

You can see from the window below that I have already navigated to www.bongo5.com but the extraction switch is still off.

Turn the extraction switch "ON" as shown in the window below and move to the next step.

Step 4:Training the “columns” or specifying the data we want to scrape

In this step,I will specify exactly what kind of data I want to scrape from the blog.On import.io app specifying the data you want to scrape is referred to as “training the columns”.Columns represent the data set I want to scrape(post titles,authors’ names and posts URLs).

In order to understand this step, you need to know the difference between a blog page and a blog post.A page might have a single post or multiple posts depending on the blog configuration.

A blog might have several blog posts,even hundreds or thousands of posts.But I will take only one session to train the “extractor” about the data I want to extract.I will do so by using an import.io visual highlighter.Once the data extraction is turned on the-the highlighter will appear by default.

I will do the training session for a single post in a single blog page with multiple posts then the extractor will extract data automatically for the remaining posts on the “same” blog page.
Step 4a:Creating “post_title” column

I will start by renaming “my_column” into the name of the data I want to scrape.Our goal in this tutorial is to scrape the blog posts titles,posts URLs,authors names and get social statistics later so I will create columns for posts titles,posts URLs,authors names.Later on, I will teach you how to get social statistics for the post URLs.

After editing “my_column” into “post_title” then point the mouse cursor over to any of the Posts title on the same blog page and the visual highlighter will automatically appear.Using the highlighter I can select the data I want to extract.

You can see below that I selected one of the blog post titles on the page. The rectangular box with the orange border is the visual highlighter.

The app will then ask you how the data is arranged on the page. Since there is more than one post on a single page, you have rows of repeating data; this blog shows 25 posts per page, so you should select “many rows”. If a page contains only a single post, you would select “Just one row” instead.

Source: http://nocodewebscraping.com/web-scraping-for-dummies-tutorial-with-import-io-without-coding/

Thursday, 4 August 2016

Three Common Methods For Web Data Extraction

Three Common Methods For Web Data Extraction

Probably the most common technique traditionally used to extract data from web pages is to cook up some regular expressions that match the pieces you want (e.g., URLs and link titles). Our screen-scraper software actually started out as an application written in Perl for this very reason. In addition to regular expressions, you might also use some code written in something like Java or Active Server Pages to parse out larger chunks of text. Using raw regular expressions to pull out the data can be a little intimidating to the uninitiated, and can get a bit messy when a script contains a lot of them. At the same time, if you're already familiar with regular expressions and your scraping project is relatively small, they can be a great solution.
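As a minimal sketch of this approach (in Python, with a placeholder URL; any real page will need its own, more careful pattern), pulling link URLs and titles out of an HTML page might look like this:

import re
import urllib.request

# Fetch the page; the URL here is only a placeholder for illustration.
html = urllib.request.urlopen("http://example.com/").read().decode("utf-8", "ignore")

# A deliberately simple, somewhat "fuzzy" pattern for <a href="...">title</a> pairs.
link_pattern = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>', re.IGNORECASE | re.DOTALL)

for url, title in link_pattern.findall(html):
    print(title.strip(), "->", url)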

Other techniques for getting the data out can get very sophisticated as algorithms that make use of artificial intelligence and such are applied to the page. Some programs will actually analyze the semantic content of an HTML page, then intelligently pull out the pieces that are of interest. Still other approaches deal with developing "ontologies", or hierarchical vocabularies intended to represent the content domain.

There are a number of companies (including our own) that offer commercial applications specifically intended to do screen-scraping. The applications vary quite a bit, but for medium to large-sized projects they're often a good solution. Each one will have its own learning curve, so you should plan on taking time to learn the ins and outs of a new application. Especially if you plan on doing a fair amount of screen-scraping it's probably a good idea to at least shop around for a screen-scraping application, as it will likely save you time and money in the long run.

So what's the best approach to data extraction? It really depends on what your needs are, and what resources you have at your disposal. Here are some of the pros and cons of the various approaches, as well as suggestions on when you might use each one:

Raw regular expressions and code

Advantages:

- If you're already familiar with regular expressions and at least one programming language, this can be a quick solution.

- Regular expressions allow for a fair amount of "fuzziness" in the matching such that minor changes to the content won't break them.

- You likely don't need to learn any new languages or tools (again, assuming you're already familiar with regular expressions and a programming language).

- Regular expressions are supported in almost all modern programming languages. Heck, even VBScript has a regular expression engine. It's also nice because the various regular expression implementations don't vary too significantly in their syntax.

Disadvantages:

- They can be complex for those that don't have a lot of experience with them. Learning regular expressions isn't like going from Perl to Java. It's more like going from Perl to XSLT, where you have to wrap your mind around a completely different way of viewing the problem.

- They're often confusing to analyze. Take a look through some of the regular expressions people have created to match something as simple as an email address and you'll see what I mean.

- If the content you're trying to match changes (e.g., they change the web page by adding a new "font" tag) you'll likely need to update your regular expressions to account for the change.

- The data discovery portion of the process (traversing various web pages to get to the page containing the data you want) will still need to be handled, and can get fairly complex if you need to deal with cookies and such.

When to use this approach: You'll most likely use straight regular expressions in screen-scraping when you have a small job you want to get done quickly. Especially if you already know regular expressions, there's no sense in getting into other tools if all you need to do is pull some news headlines off of a site.

Ontologies and artificial intelligence

Advantages:

- You create it once and it can more or less extract the data from any page within the content domain you're targeting.

- The data model is generally built in. For example, if you're extracting data about cars from web sites the extraction engine already knows what the make, model, and price are, so it can easily map them to existing data structures (e.g., insert the data into the correct locations in your database).

- There is relatively little long-term maintenance required. As web sites change you likely will need to do very little to your extraction engine in order to account for the changes.

Disadvantages:

- It's relatively complex to create and work with such an engine. The level of expertise required to even understand an extraction engine that uses artificial intelligence and ontologies is much higher than what is required to deal with regular expressions.

- These types of engines are expensive to build. There are commercial offerings that will give you the basis for doing this type of data extraction, but you still need to configure them to work with the specific content domain you're targeting.

- You still have to deal with the data discovery portion of the process, which may not fit as well with this approach (meaning you may have to create an entirely separate engine to handle data discovery). Data discovery is the process of crawling web sites such that you arrive at the pages where you want to extract data.

When to use this approach: Typically you'll only get into ontologies and artificial intelligence when you're planning on extracting information from a very large number of sources. It also makes sense to do this when the data you're trying to extract is in a very unstructured format (e.g., newspaper classified ads). In cases where the data is very structured (meaning there are clear labels identifying the various data fields), it may make more sense to go with regular expressions or a screen-scraping application.

Screen-scraping software

Advantages:

- Abstracts most of the complicated stuff away. You can do some pretty sophisticated things in most screen-scraping applications without knowing anything about regular expressions, HTTP, or cookies.

- Dramatically reduces the amount of time required to set up a site to be scraped. Once you learn a particular screen-scraping application the amount of time it requires to scrape sites vs. other methods is significantly lowered.

- Support from a commercial company. If you run into trouble while using a commercial screen-scraping application, chances are there are support forums and help lines where you can get assistance.

Disadvantages:

- The learning curve. Each screen-scraping application has its own way of going about things. This may imply learning a new scripting language in addition to familiarizing yourself with how the core application works.

- A potential cost. Most ready-to-go screen-scraping applications are commercial, so you'll likely be paying in dollars as well as time for this solution.

- A proprietary approach. Any time you use a proprietary application to solve a computing problem (and proprietary is obviously a matter of degree) you're locking yourself into using that approach. This may or may not be a big deal, but you should at least consider how well the application you're using will integrate with other software applications you currently have. For example, once the screen-scraping application has extracted the data how easy is it for you to get to that data from your own code?

When to use this approach: Screen-scraping applications vary widely in their ease-of-use, price, and suitability to tackle a broad range of scenarios. Chances are, though, that if you don't mind paying a bit, you can save yourself a significant amount of time by using one. If you're doing a quick scrape of a single page you can use just about any language with regular expressions. If you want to extract data from hundreds of web sites that are all formatted differently you're probably better off investing in a complex system that uses ontologies and/or artificial intelligence. For just about everything else, though, you may want to consider investing in an application specifically designed for screen-scraping.

As an aside, I thought I should also mention a recent project we've been involved with that has actually required a hybrid approach of two of the aforementioned methods. We're currently working on a project that deals with extracting newspaper classified ads. The data in classifieds is about as unstructured as you can get. For example, in a real estate ad the term "number of bedrooms" can be written about 25 different ways. The data extraction portion of the process is one that lends itself well to an ontologies-based approach, which is what we've done. However, we still had to handle the data discovery portion. We decided to use screen-scraper for that, and it's handling it just great. The basic process is that screen-scraper traverses the various pages of the site, pulling out raw chunks of data that constitute the classified ads. These ads then get passed to code we've written that uses ontologies in order to extract out the individual pieces we're after. Once the data has been extracted we then insert it into a database.

Source: http://ezinearticles.com/?Three-Common-Methods-For-Web-Data-Extraction&id=165416

Sunday, 31 July 2016

Scraping LinkedIn Public Profiles for Fun and Profit

Scraping LinkedIn Public Profiles for Fun and Profit

Reconnaissance and Information Gathering is a part of almost every penetration testing engagement. Often, the tester will only perform network reconnaissance in an attempt to disclose and learn the company's network infrastructure (i.e. IP addresses, domain names, etc.), but there are other types of reconnaissance to conduct, and no, I'm not talking about dumpster diving. Thanks to social networks like LinkedIn, OSINT/WEBINT now yields even more information. This information can then be used to help the tester test anything from social engineering to weak passwords.

In this blog post I will show you how to use Pythonect to easily generate potential passwords from LinkedIn public profiles. If you haven't heard about Pythonect yet, it is a new, experimental, general-purpose dataflow programming language based on the Python programming language. Pythonect is most suitable for creating applications that are themselves focused on the "flow" of the data. An application that generates passwords from the public LinkedIn profiles of a given company's employees has a coherent and clear dataflow:

(1) Find all the employees' public LinkedIn profiles → (2) Scrape all the employees' public LinkedIn profiles → (3) Crunch all the data into potential passwords

Now that we have the general concept and high-level overview out of the way, let's dive in to the details.

Finding all the employees' public LinkedIn profiles will be done via Google Custom Search Engine, a free service by Google that allows anyone to create their own search engine. The idea is to create a search engine that, when searching for a given company name, will return all the employees' public LinkedIn profiles. How? When creating a Google Custom Search Engine it's possible to refine the search results to a specific site (i.e. 'Sites to search'), and we're going to limit ours to: linkedin.com. It's also possible to fine-tune the search results even further, e.g. uk.linkedin.com to find only employees from the United Kingdom.

Access to the newly created Google Custom Search Engine will be made using a free API key obtained from the Google API Console. Why go through the Google API? Because it allows automation (no CAPTCHAs), and it also means that the search-result pages will be returned as JSON (as opposed to HTML). The only catch with the free API key is that it's limited to 100 queries per day, but it's possible to buy an API key that is not limited.

Scraping the profiles is a matter of iterating over all the hCards in all the search-result pages and extracting the employee name from each hCard. What is an hCard? hCard is a microformat for publishing the contact details of people, companies, organizations, and places. hCard is also supported by social networks such as Facebook, Google+ and LinkedIn for exporting public profiles. Google parses hCards when indexing and, when relevant, uses them in search-result pages. In other words, when search-result pages include LinkedIn public profiles, they will appear as hCards and can be easily parsed.

Let's see the implementation of the above:

#!/usr/bin/python
#
# Copyright (C) 2012 Itzik Kotler
#
# scraper.py is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# scraper.py is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with scraper.py.  If not, see <http://www.gnu.org/licenses/>.

"""Simple LinkedIn public profiles scraper that uses Google Custom Search"""

import urllib
import simplejson


BASE_URL = "https://www.googleapis.com/customsearch/v1?key=<YOUR GOOGLE API KEY>&cx=<YOUR GOOGLE SEARCH ENGINE CX>"


def __get_all_hcards_from_query(query, index=0, hcards=None):
    """Recursively walk the Custom Search result pages and collect hCards."""

    # Avoid the mutable-default-argument pitfall of the original version.
    if hcards is None:
        hcards = {}

    url = query

    if index != 0:
        url = url + '&start=%d' % (index)

    json = simplejson.loads(urllib.urlopen(url).read())

    if 'error' in json:
        print "Stopping at %s due to Error!" % (url)
        print json
    else:
        for item in json['items']:
            try:
                # Map the person's full name ('fn') to their profile title.
                hcards[item['pagemap']['hcard'][0]['fn']] = item['pagemap']['hcard'][0]['title']
            except KeyError:
                pass

        # Follow the 'nextPage' pointer until the result pages run out.
        if 'nextPage' in json['queries']:
            return __get_all_hcards_from_query(query, json['queries']['nextPage'][0]['startIndex'], hcards)

    return hcards


def get_all_employees_by_company_via_linkedin(company):
    """Return a list of employee names scraped from public LinkedIn profiles."""

    queries = ['"at %s" inurl:"in"', '"at %s" inurl:"pub"']

    result = {}

    for query in queries:
        _query = query % company
        result.update(__get_all_hcards_from_query(BASE_URL + '&q=' + _query))

    return list(result)

Replace <YOUR GOOGLE API KEY> and <YOUR GOOGLE SEARCH ENGINE CX> in the code above with your Google API Key and Google Search Engine CX respectively, save it to a file called scraper.py, and you're ready!

To kick-start, here is a simple program in Pythonect (that utilizes the scraper module) that searches for and prints the full names of all the Pythonect company employees:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin -> print

The output should be:

Itzik Kotler

In my LinkedIn profile, I have listed Pythonect as a company that I work for, and since no one else works there, when searching for all the employees of the Pythonect company, only my LinkedIn profile comes up.
For demonstration purposes I will keep using this example (i.e. the "Pythonect" company and the "Itzik Kotler" employee), but go ahead and replace Pythonect with other, more popular, company names and see the results.

Now that we have a working skeleton, let's take its output and start crunching it. Keep in mind that every "password generation formula" is merely a guess. The examples below are only a sampling of what can be done; there are obviously many more possibilities and you are encouraged to experiment. But first, let's normalize the output so that it is consistent before any operations are performed on it:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin -> string.lower(''.join(_.split()))

The normalization procedure is short and simple: convert the string to lowercase and remove any spaces. The output should now be:

itzikkotler

As for data manipulation, out of the box (thanks to the Python standard library) we've got itertools and its combinatoric generators. Let's start by applying itertools.product:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin -> string.lower(''.join(_.split())) -> itertools.product(_, repeat=4) -> print

The code above will generate and print every 4-character password from the letters i, t, z, k, o, t, l, e, r. However, it won't cover passwords with uppercase letters in them. So here's a simple and straightforward implementation of a cycle_uppercase function that cycles through the input letters, yielding copies of the input with one letter in uppercase at a time:

def cycle_uppercase(i):
    s = ''.join(i)
    for idx in xrange(0, len(s)):
        yield s[:idx] + s[idx].upper() + s[idx+1:]

To use it, save it to a file called itertools2.py, and then simply add it to the Pythonect program after the itertools.product(_, repeat=4) block, as follows:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin \
    -> string.lower(''.join(_.split())) \
        -> itertools.product(_, repeat=4) \
            -> itertools2.cycle_uppercase \
                -> print

Now the program will also cover passwords that include a single uppercase letter. Moving on with the data manipulation: sometimes a password might contain symbols that are not found within the scraped data. In this case, it is necessary to build a generator that takes the input and adds symbols to it. Here is a short and simple generator implemented as a generator expression:

[_ + postfix for postfix in ['123','!','$']]

To use it, simply add it to the Pythonect program after the itertools2.cycle_uppercase block, as follows:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin \
    -> string.lower(''.join(_.split())) \
        -> itertools.product(_, repeat=4) \
            -> itertools2.cycle_uppercase \
                -> [_ + postfix for postfix in ['123','!','$']] \
                    -> print

The result is that the program now appends the strings '123', '!' and '$' to every generated password, which increases the chances of guessing the user's real password (or not, depending on the password :))

To summarize, it's possible to take OSINT/WEBINT data on a given person or company and use it to generate potential passwords, and it's easy to do with Pythonect. There are, of course, many different ways to manipulate the data into passwords, and many programs and filters that can be used. In this respect, Pythonect, being a flow-oriented language, makes it easy to experiment and research different modules and programs in a "plug and play" manner.
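For readers who prefer plain Python over Pythonect, here is a rough, stand-alone sketch of the same pipeline, under the assumption that the scraper returned the single name "Itzik Kotler"; it normalizes the name, builds the 4-letter combinations, cycles one uppercase letter through each candidate and appends the suffixes (only the first few candidates are printed, since the full list is very large):

import itertools

def cycle_uppercase(s):
    # Yield copies of s with one letter uppercased at a time.
    for idx in range(len(s)):
        yield s[:idx] + s[idx].upper() + s[idx + 1:]

def candidates(name, length=4, suffixes=('123', '!', '$')):
    normalized = ''.join(name.split()).lower()  # e.g. "itzikkotler"
    for combo in itertools.product(normalized, repeat=length):
        for variant in cycle_uppercase(''.join(combo)):
            for suffix in suffixes:
                yield variant + suffix

# Print only the first 20 candidates for brevity.
for password in itertools.islice(candidates('Itzik Kotler'), 20):
    print(password)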

Source: http://blog.ikotler.org/2012/12/scraping-linkedin-public-profiles-for.html

Monday, 11 July 2016

Extract Data from Multiple Web Pages into Excel using import.io

In this tutorial, I will show you how to extract data from multiple web pages of a website or blog and save the extracted data into an Excel spreadsheet for further processing. There are various methods and tools for doing this, but I found them complicated and I prefer to use import.io to accomplish the task. Import.io doesn’t require you to have programming skills. The platform is quite powerful, user-friendly, with a lot of support online and, above all, FREE to use.

You can use the online version of their data extraction software or a desktop application. The online version will be covered in this tutorial.

Let us get started.

Step 1: Find a web page you want to extract data from.
You can extract data such as prices, images, authors’ names, addresses, dates, etc.

Step 2: Enter the URL for that web page into the text box here and click “Extract data”.

Once you click “Extract data”, import.io will transform the web page into data in seconds. Data such as authors, images, post publication dates and post titles will be pulled from the web page, as shown in the image below.

Import.io extracted only 40 posts or articles from the first page of the blog.
If you visit bongo5.com you will notice that the blog has a total of 600+ pages at the time of writing this article, and each page holds 40 posts or articles, as shown in the image below.
The next step will show you how to extract data from multiple pages of the website into Excel.

Step 3: Extract Data from Multiple Web Pages into Excel

Using the import.io online tool you can extract data from a maximum of 20 web pages. Go to the bottom right corner of the import.io online tool page and click “Download CSV” to save the extracted data from those 20 pages into Excel.
Note: Using the import.io desktop application you can extract an unlimited number of pages and pinpoint only the data you want to extract. Check out this tutorial on how to use the desktop application.
Once you click “Download CSV”, the following pop-up window will appear. You can specify the number of pages you want to get data from, up to a maximum of 20 pages, then click “Go!”
You will need to sign up for a free account to download the data as a CSV, or save it as an API. If you save it as an API, you can come back to the API later to extract new data when the web page is updated, without needing to repeat the steps we have done so far. You can also use the API for integration into other platforms.
The image below shows 20 rows out of the 800 rows of data extracted from the 20 pages of the website.
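As one example of the “further processing” mentioned earlier, here is a minimal Python sketch that reads the downloaded CSV and counts how many of the extracted posts each author has; the file name and the “author” column name are assumptions, since the actual headers depend on the data import.io pulled from the page:

import csv
from collections import Counter

author_counts = Counter()

# "bongo5_posts.csv" is a placeholder name for the file downloaded above.
with open('bongo5_posts.csv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        author_counts[row.get('author', 'unknown')] += 1

# List the most prolific authors first.
for author, count in author_counts.most_common():
    print(author, count)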

Conclusion

The online tool doesn’t offer as much flexibility as the desktop application. For example, you cannot extract more than 20 pages and you cannot pinpoint the type of data you want to extract. For a more advanced tutorial on how to use the desktop application, you can check out this tutorial I created earlier.

Source URL : http://nocodewebscraping.com/extract-multiple-web-pages-data-into-excel/

Sunday, 10 July 2016

4 Web Scraping Tools To Save You Time On Data Extraction

Whether you are working on a product website, struggling to add a live data feed to your app, or merely need to pull out a huge amount of online data for analysis, an accurate web scraping tool can save you loads of time and keep you sane. Here are four powerful web scraping tools to save you from copy-pasting or spending time writing your own scripts.

Uipath specializes in developing various process automation software, including web scraping and screen scraping software for desktop and web. The Uipath web scraper is perfect for non-coders and easily surpasses most common data extraction challenges, including page navigation, digging through Flash and even scraping PDF files. All you need to do is open the web scraping wizard and simply highlight the data you need to extract. The tool will scrape all the data following this pattern on all the pages you’ve chosen and sort it accordingly. You can add as many items for scraping as you like and have them sorted in respective columns. As a result, you receive a neat Excel or CSV document with duplicates removed.

Moreover, Uipath isn’t just about scraping. This software can be used not only for extracting data, but also to manipulate the interface of another app, thus establishing data transfers between the two of them. Basically, this tool could be used to conduct any repetitive task a human could do, yet much faster and with higher accuracy.

Pros: You can automate form filling, clicking buttons, navigation etc. Uipath scraper is impressively accurate, fast and simple to use. It “reads” all types of data on screen (JS, HTML, Silverlight and more), plus you can train the software to emulate human actions of various complexity.

Cons: Premium software runs at a premium price. Uipath is an affordable professional solution, but may be a bit too pricey for personal use.

Import.io offers you a free desktop app to help you scrape all the data you need from an unlimited number of web pages. The service treats each page as a potential data source to generate an API from. If the page you’ve submitted has been previously processed, you can access its API and get some of the data. Otherwise, Import.io will guide you through the process of creating the scraping matrix by building connectors (for navigation) or extractors (to pull out the needed data). Afterwards, you submit a request for extraction and it’s typically processed within 24 hours. All the data is private and you can schedule automatic refreshes at any chosen interval.

Pros: The service is easy to use with no tech skills needed. It can scrape pages with data behind a login/password, plus it’s free. A minimalistic, effective design and simple navigation come along with it.

Cons: Import.io has a hard time navigating through combinations of JavaScript/POST and cannot navigate from one page to another (e.g. click next, second page, etc.). Sometimes it takes over 24 hours to receive the report. Besides, it’s a browser-only app, not compatible with other applications.

Kimono is a popular web scraper among app developers who prefer to power up their products with live data and no additional code. It saves you tons of time when you need to fill up your app with mashed-up data. Install the Kimono browser bookmarklet, highlight the page elements you need, and provide some positive/negative examples to train the tool. After labeling all the data you can download it in CSV/JSON/web endpoint format. The APIs created for your pages are stored in the cloud and you can run them on a schedule. So far, Kimono is free to use, with pro and enterprise solutions to be launched soon.

Pros: The tool works pretty fast and is great for scraping newsfeeds and prices. The data is rather accurate.

Cons: No page navigation is available, and you need to spend quite a lot of time training Kimono before it starts to pull out multi-item data accurately enough. In general, I’d say Kimono is more of an app mash-up creator than a full-scale web scraper.

Screen Scraper is pretty neat and tackles a lot of difficult tasks, including navigation and precise data extraction; however, it requires a bit of programming/tokenization skill if you’d like it to run super smoothly. Launch the software, add a proxy, start recording the list of your actions and create extraction patterns (some coding required). It works great with HTML and JavaScript, but you should test it with Citrix and other platforms. Basically, Screen Scraper helps you write simple web scraping scripts and lets you download the extracted data in TXT/CSV/Excel format.

Pros: When set up correctly, there’s no data extraction task Screen Scraper fails to handle.
Cons: The tool is pricey and you’ll have to go through the documentation and have basic coding skills to use it.

Source URL: http://tech.co/4-web-scraping-tools-save-time-data-extraction-2015-03

Friday, 8 July 2016

ECJ clarifies Database Directive scope in screen scraping case

The European Court of Justice (ECJ) has clarified the scope of Directive 96/9/EC on the legal protection of databases (Database Directive) in a case concerning the extraction of data from a third party’s website by means of automated systems or software for commercial purposes (so-called ‘screen scraping’).

Flight data extracted

The case, Ryanair Ltd vs. PR Aviation BV, C-30/14, is of interest to a range of companies, such as price comparison websites. It stemmed from Dutch company PR Aviation’s operation of a website where consumers can search through the flight data of low-cost airlines (including Ryanair), compare prices and, on payment of a commission, book a flight. The relevant flight data is extracted from third parties’ websites by means of ‘screen scraping’ practices.

Ryanair claimed that PR Aviation’s activity:

• amounted to infringement of copyright (relating to the structure and architecture of the database) and of the so-called sui generis database right (i.e. the right granted to the ‘maker’ of the database where certain investments have been made to obtain, verify, or present the contents of a database) under the Netherlands law implementing the Database Directive;

• constituted breach of contract. In this respect, Ryanair claimed that a contract existed with PR Aviation for the use of its website. Access to the latter requires acceptance, by clicking a box, of the airline’s general terms and conditions which, amongst others, prohibit unauthorized ‘screen scraping’ practices for commercial purposes.

Ryanair asked Dutch courts to prohibit the infringement and order damages. In recent years the company has been engaged in several legal cases against web scrapers across Europe.

The Local Court, Utrecht, and the Court of Appeals of Amsterdam dismissed Ryanair’s claims on different grounds. The Court of Appeals, in particular, cited PR Aviation’s screen scraping of Ryanair’s website as amounting to a “normal use” of said website within the meaning of the lawful user exceptions under Sections 6 and 8 of the Database Directive, which cannot be derogated by contract (Section 15).

Ryanair appealed

Ryanair appealed the decision before the Netherlands Supreme Court (Hoge Raad der Nederlanden), which decided to refer the following question to the ECJ for a preliminary ruling: “Does the application of [Directive 96/9] also extend to online databases which are not protected by copyright on the basis of Chapter II of said directive or by a sui generis right on the basis of Chapter III, in the sense that the freedom to use such databases through the (whether or not analogous) application of Article[s] 6(1) and 8, in conjunction with Article 15 [of Directive 96/9] may not be limited contractually?.”

The ECJ’s ruling

The ECJ (without the need of the opinion of the advocate general) ruled that the Database Directive is not applicable to databases which are not protected either by copyright or by the sui generis database right. Therefore, exceptions to restricted acts set forth by Sections 6 and 8 of the Directive do not prevent the database owner from establishing contractual limitations on its use by third parties. In other words, restrictions to the freedom to contract set forth by the Database Directive do not apply in cases of unprotected databases. Whether Ryanair’s website may be entitled to copyright or sui generis database right protection needs to be determined by the competent national court.

The ECJ’s decision is not particularly striking from a legal standpoint. Yet, it could have a significant impact on the business model of price comparison websites, aggregators, and similar businesses. Owners of databases that could not rely on intellectual property protection may contractually prevent extraction and use (“scraping”) of content from their online databases. Thus, unprotected databases could receive greater protection than the one granted by IP law.

Antitrust implications

However, the lawfulness of contractual restrictions prohibiting access and reuse of data through screen scraping practices should be assessed under an antitrust perspective. In this respect, in 2013 the Court of Milan ruled that Ryanair’s refusal to grant access to its database to the online travel agency Viaggiare S.r.l. amounted to an abuse of dominant position in the downstream market of information and intermediation on flights (decision of June 4, 2013 Viaggiare S.r.l. vs Ryanair Ltd). Indeed, a balance should be struck between the need to compensate the efforts and investments made by the creator of the database with the interest of third parties to be granted with access to information (especially in those cases where the latter are not entitled to copyright protection).

Additionally, web scraping raises other issues which were not considered in the ECJ’s ruling. These include, but are not limited to, trademark law (i.e., whether the use of a company’s names/logos by the web scraper without consent may amount to trademark infringement), data protection (e.g., where the scraping involves personal data), and unfair competition.


Source URL: http://yellowpagesdatascraping.blogspot.in/2015/07/ecj-clarifies-database-directive-scope.html

Saturday, 18 June 2016

An Easy Way For Data Extraction

There are many data scraping tools available on the internet. With these tools you can download large amounts of data without any stress. Over the past decade, the internet revolution has turned the entire world into an information center. You can obtain any type of information from the internet; however, if you want particular information on one topic, you may need to search many websites. If you then want to download all the information from those websites, you have to copy it and paste it into your documents, which is hectic work for anyone. With these scraping tools, you can save time and money and reduce manual work.

A web data extraction tool extracts data from the HTML pages of different websites and compares it. Every day, many new websites are hosted on the internet, and it is not possible to visit all of them yourself. With these data mining tools, you are able to cover far more of the web pages on the internet. If you use a wide range of applications, these scraping tools are very useful to you.

Data extraction software is used to compare structured data on the internet. There are many search engines that will help you find a website on a particular issue, but the data on different sites appears in different styles. A scraping expert will help you compare the data across different sites and structure it for your records.

A web crawler tool is used to index the web pages on the internet; it moves data from the internet to your hard disk so that you can browse it much faster. This is especially useful if you try to download data from the internet during off-peak hours: downloads that would otherwise take a long time can be completed at a much faster rate. There is another tool for business people called an email extractor. With this tool, you can easily collect the email addresses of your target customers and send advertisements for your product to them at any time. It is the best tool for building a customer database. A rough sketch of the idea behind such an extractor is shown below.
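As an illustration of what an email extractor does under the hood, here is a minimal Python sketch that scans the text of a page for email addresses with a deliberately simple regular expression; the URL is only a placeholder, and real extractors crawl many pages and deduplicate across them:

import re
import urllib.request

# Fetch a page to scan; the URL is a placeholder for illustration only.
text = urllib.request.urlopen('http://example.com/').read().decode('utf-8', 'ignore')

# A simple pattern covering the most common email address formats.
email_pattern = re.compile(r'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}')

for address in sorted(set(email_pattern.findall(text))):
    print(address)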

Source URL: http://ezinearticles.com/?An-Easy-Way-For-Data-Extraction&id=3517104