Web scraping in R using rvest

I am not very conversant with web scraping, but I understand the importance of the technique given that a lot of very useful data is embedded in HTML pages. Hence I was very excited when I came across this blog post on the RStudio site which introduced a new package called rvest for web scraping. The GitHub repository of the package is here.

As an exercise in scraping web pages, I set out to get all the Exchange Traded Fund (ETF) data from the London Stock Exchange website.

First things first: load up the rvest package and set the base URL and a download location where the HTML will be saved. You can do this without downloading the file, but there were some proxy settings in the environment I was working in which prevented me from doing so, so I opted to download the HTML, process it and then delete it.

library("rvest")
url <- "http://www.londonstockexchange.com/exchange/prices-and-markets/ETFs/ETFs.html"
download_folder <- "C:/R/"
etf_table <- data.frame()

The next thing to determine is how many pages there are in the ETF table. If you visit the URL, you will find that just above the table where the ETFs are displayed there is a string telling us how many pages there are. It was 38 when I was writing the script. If you look at the source HTML, this string appears in a paragraph tag whose class is floatsx.

Time to call html_node to get the paragraph with class floatsx, and then run html_text to get the actual string. Then it's a matter of taking a substring of the complete string to get the number of pages.

#find how many pages are there
download.file(url,paste(download_folder,"ETFs.html",sep=""))
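#html() parses the downloaded file into an R object (in newer versions of rvest this function has been superseded by read_html())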
html <- html(paste(download_folder,"ETFs.html",sep=""))
pages <- html_text(html_node(html,"p.floatsx"))
pages <- as.numeric(substr(pages,nchar(pages)-1,nchar(pages)))

Now that we know how many pages there are, we want to iterate over each page and get the ETF values from the table. Again we load up the HTML and call html_nodes, but this time we are looking for all the tables. On each page there is just one table, which displays the ETF rates, so we are only interested in the first one.

#for each page
for (p in 1:pages) {
 cur_url <- paste(url,"?&page=",p,sep="")
 #download the file
 download.file(cur_url,paste(download_folder,p,".html",sep=""))
 #create html object
 html <- html(paste(download_folder,p,".html",sep=""))
 #look for tables on the page and get the first one
 table <- html_table(html_nodes(html,"table")[[1]])
 #only first 6 columns contain information that we need
 table <- table[1:6]
 #stick a timestamp at end
 table["Timestamp"] <- Sys.time()
 #add into the final results table
 etf_table <- rbind(etf_table,table)
 #remove the originally downloaded file
 file.remove(paste(download_folder,p,".html",sep=""))

}

#summary of the scraped data
summary(etf_table)

As you can see, rvest makes scraping web data extremely simple, so give it a try. The R Markdown file and knitted HTML are available at the GitHub link below if you want to run it in your own environment.
GitHub link


Estimating SQL Server Table Sizes

LichtenBytes

Microsoft gives guidance for calculating the estimated sizes of your tables and indexes at http://msdn.microsoft.com/en-us/library/ms175991.aspx. Performing such estimates is an important step in determining required server capacity.

Unfortunately, Microsoft’s guidance consists of long textual descriptions of the steps that need to be followed to perform the estimations. It takes some time to interpret the instructions given by Microsoft and perform the calculations.

With that in mind, I put together a spreadsheet to help me perform the calculations. To use the spreadsheet, I simply fill out the spreadsheet with the total number of rows in the table and details of the columns (data types and sizes) in the table and/or index. In my experience, the estimates provided by the spreadsheet are within 5-10% of the actual sizes of the tables/indexes, which has been good enough for my needs.
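As an aside, once a table actually exists you can compare an estimate like this against the real numbers. A minimal check, assuming a hypothetical table called dbo.MyTable (not part of the spreadsheet; just a quick way to get the actual figures):

-- Actual space used by one table (rows, reserved, data, index size, unused)
EXEC sp_spaceused N'dbo.MyTable';

-- Or the same information from the DMVs (used_page_count is in 8 KB pages)
SELECT OBJECT_NAME(object_id) AS table_name
	,SUM(used_page_count) * 8 AS used_kb
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID(N'dbo.MyTable')
GROUP BY object_id;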

You can download the spreadsheet here.

To complete the spreadsheet, simply…


A warning about the SSIS Foreach Loop container when processing files from a folder

The SSIS Foreach Loop container is frequently used to process files from a specific folder. If the names of the files are not known beforehand, the usual way is to process all the files with a specific extension, e.g. all CSV files or all XLSX files.

Until recently I was under the impression that if you put “*.csv” in the Files textbox of the Collection tab of the Foreach Loop editor, SSIS would look only for CSV files. However, this is not true. It appears that when SSIS looks for specific file types in a folder, the search is a pattern-based search. So if you put *.CSV, it will also process files with extensions like .CSV_New or .CSVOriginal.

To test this, I created a folder containing the following files with variations on the CSV extension.

1. File1.CSV
2. File2.CSV_ARCHIEVED
3. File3.CSVARCHIEVED
4. File4.CSV.ARCHIEVED
5. File5.CSV.ARCHIEVED.201411232359
6. File6.CSV20141129
7. File7.A.CSV
8. File8.csv
9. File9.cSv
10. File10 (no extension)

I then created a simple SSIS package with a Foreach Loop container to iterate over these files, and a Script Task within it to show a message box with the current filename. The Files textbox of the Collection tab of the Foreach Loop editor contained *.CSV. On running the package, the following files were picked up by the container:
1. File1.CSV
2. File2.CSV_ARCHIEVED
3. File3.CSVARCHIEVED
4. File6.CSV20141129
5. File7.A.CSV
6. File8.csv
7. File9.cSv

It is not a normal scenario to have files with such varied extensions in the same folder, but who knows. 🙂
If you are not sure which files would be processed by the SSIS Foreach Loop container, the best thing to do is navigate to the folder in Windows Explorer and put the file extension in the search box. The files it returns are the ones SSIS would pick up.

Faster SSAS Processing by using Shared Memory Protocol

I came across this very handy tip to increase SSAS processing speed in the SQL Server 2008 R2 Analysis Services Operations Guide and thought it was worth sharing.
The guide recommends using the shared memory protocol when getting data from the relational data source, if the source is SQL Server and both SSAS and SQL Server are on the same physical box. Exchanging data over shared memory is much faster than over TCP/IP, as you probably already know. You will need to perform two steps to force the SSAS data source to use shared memory when querying the underlying SQL Server:
1. Check that Shared Memory Protocol is enabled in SQL Server configuration manager.
2. Prefix the server name in the SSAS data source connection string with lpc: as shown below.

Provider=SQLNCLI10.1;Data Source=lpc:ThisServer\INST1;Integrated Security=SSPI;Initial Catalog=WordMatch
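To confirm that the data source is actually coming in over shared memory, you can check from the SQL Server side. This is not from the guide, just a quick sanity check using sys.dm_exec_connections:

-- Run this while SSAS processing is in progress; 'Shared memory' in net_transport
-- confirms the lpc: prefix is being honoured, 'TCP' means it is not.
SELECT session_id
	,net_transport
	,auth_scheme
FROM sys.dm_exec_connections;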

The guide claims to have achieved 112,000 rows per second using TCP/IP versus 180,000 rows per second using shared memory, which is impressive. In my own test, a slightly modified Adventure Works cube took 56 seconds to process over TCP/IP and 47 seconds over shared memory: an improvement of about 16%.

99 Problems in R

In my Introduction to R post, I introduced R and provided some resources to learn it. I am learning R myself and finding the learning curve a bit steep. Anyway, the best way to learn a new programming language is to practice as much as possible. So, inspired by the ‘99 Problems’ sets in various languages (links below), I am creating a ‘99 Problems in R’ set. The project is on GitHub. I am new to GitHub but finding it easier to share code through GitHub than here on the blog. Hopefully in future I will make more use of GitHub.
The files are in .Rmd format, which can be opened in RStudio. I have also added the knitted HTML files. The repo is here:
https://github.com/saysmymind/99-Problems-R
Be warned that the solutions are written by me, an amateur R programmer, so there might be better ways of solving some of them. I will try to solve more problems and keep adding them to the repo. In the meantime, feel free to open a pull request and peek at the code.

I wish I could say ‘I got 99 problems but R ain’t one’ but alas, I am not there yet. 🙂
99 Haskell Problems 

99 Python Problems

99 Prolog Problems

99 LISP Problems

99 Perl 6 Problems

99 OCaml Problems

Calculated distinct count measures in SSAS

Distinct count measures are a fact of life for an SSAS developer. No matter how much we try to avoid them, they are always asked for and we have to deal with them somehow. Another painful fact, as you might already know, is that we cannot create multiple distinct count measures in the same measure group, so each distinct count measure has to sit in its own measure group which, IMO, does not look right when browsing the cube. In this post, I want to show a method of creating calculated distinct count measures which I found on Richard Lees' blog here, with a slight modification.
http://richardlees.blogspot.co.uk/2008/10/alternative-to-physical-distinct-count.html

Using Adventure Works, let’s say the end users want to know the distinct count of customers who have placed orders on the internet. I can add a calculation like this to the cube:

CREATE MEMBER CURRENTCUBE.[Measures].[Unique Customers]
AS COUNT(NONEMPTY([Customer].[Customer].[Customer].members,
[Measures].[Internet Order Count])),
VISIBLE = 1 , ASSOCIATED_MEASURE_GROUP = 'Internet Orders' ;

This is all fine and dandy; however, as soon as I added any attribute from the customer dimension on rows or filters, the results showed incorrect values, i.e. the same count of customers was repeated for all records.

The solution is to count the number of unique customers in the context of the current attributes of the customer dimension. For example's sake, let's take the Customer City attribute. I tweaked the calculation as below to count customers only in the context of the current member of the City attribute, and it started working as expected when the Customer City attribute is used on rows, columns or filters.

CREATE MEMBER CURRENTCUBE.[Measures].[Unique Customers]
AS COUNT(NONEMPTY(
CROSSJOIN(
[Customer].[Customer].[Customer].members,
[Customer].[City].CurrentMember
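// further customer attributes would be cross-joined here in the same way, e.g. [Customer].[Gender].CurrentMember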
),[Measures].[Internet Order Count])),
VISIBLE = 1 , ASSOCIATED_MEASURE_GROUP = 'Internet Orders' ;

Of course, you will have to add all of the dimension's attributes to the CROSSJOIN, but ultimately a calculated, though complex, distinct count measure is better than having a number of physical distinct count measures, IMHO.

 

Table variables are fast but #Temp tables are faster!!

This blog post is sort of a note to myself: use-temp-tables-instead-of-table-variables-when-the-number-of-records-is-large. There it is, and I promise to remember it every time I write a DECLARE @something TABLE (… statement.
And now a bit of explanation. I recently worked on a project where we added a new fact table to our data warehouse. An SSIS package was created to get the data from the source, do the dimension lookups and populate the fact table. Simple stuff, except that I had to do some data manipulation while selecting data from the source. Now, I could have just written a stored procedure and used that as the data source, but we are all aware of the pitfalls of that, so I decided to use a table-valued function (TVF). TVFs have predictable metadata output and SSIS plays nicely with them. However, you cannot use temp tables in a TVF, so invariably I ended up using a table variable.

The package worked perfectly for smaller workloads, but as we increased the number of records in the table variable the performance problem became more and more severe. A casual Google search came up with a raft of blog posts suggesting that the culprit might be the table variable. So I changed the function to a stored procedure with a temp table and used the stored procedure in SSIS. Sure enough, our performance issue was resolved like it never existed. There are many blog posts by very intelligent people on why temp tables perform better than table variables, but I wanted to check first hand, so here's my little investigation.
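Before the numbers, here is a minimal sketch of the two shapes described above. The names (dbo.SourceTable and the two routines) are placeholders rather than objects from the actual project: a multi-statement TVF can only stage rows in a table variable, whereas a stored procedure is free to use a temp table.

-- Multi-statement TVF: staging can only go into a table variable
CREATE FUNCTION dbo.GetSourceData_TVF ()
RETURNS @result TABLE (
	id INT
	,pdate DATETIME
	)
AS
BEGIN
	INSERT INTO @result (id, pdate)
	SELECT id
		,pdate
	FROM dbo.SourceTable; -- plus whatever manipulation is needed

	RETURN;
END
GO

-- Equivalent stored procedure: a temp table (which gets statistics) can be used instead
CREATE PROCEDURE dbo.GetSourceData_Proc
AS
BEGIN
	SELECT id
		,pdate
	INTO #staging
	FROM dbo.SourceTable; -- plus whatever manipulation is needed

	SELECT id
		,pdate
	FROM #staging;
END
GO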

1. Create two temp tables and populate them with random data. These base tables have nothing to do with the actual comparison; they are just used as a source of data.

/* Create base tables which will contain random generated data */
IF OBJECT_ID('tempdb..#basetable') IS NOT NULL
	DROP TABLE #basetable

IF OBJECT_ID('tempdb..#basetable2') IS NOT NULL
	DROP TABLE #basetable2

CREATE TABLE #basetable (
	id INT
	,pdate DATETIME
	,sdate DATETIME
	)

CREATE TABLE #basetable2 (
	id INT
	,pdate DATETIME
	);

WITH id
AS (
	SELECT id = 1
	
	UNION ALL
	
	SELECT id + 1
	FROM id
	WHERE id <= 1000
	)
INSERT INTO #basetable
SELECT id
	,cast(cast(RAND() * 100000 AS INT) AS DATETIME)
	,cast(cast(RAND() * 100000 AS INT) AS DATETIME)
FROM id
OPTION (MAXRECURSION 0)

;WITH id
AS (
	SELECT id = 1
	
	UNION ALL
	
	SELECT id + 1
	FROM id
	WHERE id <= 1000
	)
INSERT INTO #basetable2
SELECT id
	,cast(cast(RAND() * 100000 AS INT) AS DATETIME)
FROM id
OPTION (MAXRECURSION 0)

2. Get the records generated in the base temp tables above into two table variables, join the table variables on the id column and update the first one with values from the second one.

/* Get the records generated above into two table variables. 
Join the table variables on the id column and update the pdate column */
DECLARE @table1 TABLE (
	id INT
	,pdate DATETIME
	,sdate DATETIME
	)
DECLARE @table2 TABLE (
	id INT
	,pdate DATETIME
	)
	)

INSERT INTO @table1
SELECT id
	,pdate
	,sdate
FROM #basetable

INSERT INTO @table2
SELECT id
	,pdate
FROM #basetable2

UPDATE @table1
SET pdate = t2.pdate
FROM @table1 t1
LEFT JOIN @table2 t2
	ON t1.id = t2.id

If you look at the estimated and actual execution plans below, they are exactly the same. The SQL Server optimizer decides to use a nested loops join because there are no statistics on table variables. Not to mention this is very slow, especially when I increased the number of records to over a million.

Estimated query execution plan when using table variables

Actual query execution plan when using table variables

3. Get the records generated in the base temp tables from step 1 into another set of temp tables, join them and update the first one with values from the second one.

 
/* Get the records generated above in two temp tables. 
Join the two temp tables on the id column and update the pdate column */

IF OBJECT_ID('tempdb..#table1') IS NOT NULL
	DROP TABLE #table1

IF OBJECT_ID('tempdb..#table2') IS NOT NULL
	DROP TABLE #table2

CREATE TABLE #table1 (
	id INT
	,pdate DATETIME
	,sdate DATETIME
	)

CREATE TABLE #table2 (
	id INT
	,pdate DATETIME
	)

INSERT INTO #table1
SELECT id
	,pdate
	,sdate
FROM #basetable

INSERT INTO #table2
SELECT id
	,pdate
FROM #basetable2

UPDATE #table1
SET pdate = t2.pdate
FROM #table1 t1
LEFT JOIN #table2 t2
	ON t1.id = t2.id

Now look at the estimated and actual execution plans. SQL Server estimated a nested loops join but actually used a hash match, which is faster in this situation. SQL Server was able to do this because statistics are available on temporary tables: the estimated plan is produced before the inserts run, but by the time the UPDATE actually executes the temp tables have statistics reflecting the real row counts, so the statement is recompiled with the better join choice.

Estimated query execution plan when using temp tables

Actual query execution plan when using temp tables
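If you want to put numbers on the difference rather than just compare the plans, one simple option is to wrap each of the two UPDATE statements above with SET STATISTICS TIME and SET STATISTICS IO and compare the output (a quick check, not part of the plans shown above):

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- re-run the table variable UPDATE and then the temp table UPDATE here,
-- then compare the CPU time, elapsed time and logical reads reported for each

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;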

This does not mean that table variables are not useful in any situation. They are brilliant when the number of rows in them is small (usually fewer than 1,000 rows as a rule of thumb); creating a temp table in that situation would not give much benefit, if any. Table variables are also lighter on the transaction log, since their scope is limited to the current execution context, and they lead to fewer stored procedure recompilations when used inside a stored procedure.
So next time you decide to use a table variable, have a second thought. Think a bit about the effect on the query execution plan, and definitely don’t use them if you are moving large amounts of data.