Big Computing: June 2014

Monday, June 30, 2014

Introduction to using Random Forests for the Kaggle Titanic Data Set

During the summer a number of the members of the Connecticut R User Group decided to work on a Kaggle competition data set to improve our R programming skills. The first data set we tried was the Titanic data set. This is a fairly simple data set from which we are trying to predict who will survive and who will not. The task for the team was to simply load the data set and the Random Forest package and run the basic model. We completed that task last week. The R code to do that is below:

## load in Package randomForest
library(randomForest)
## Read in Titanic Data training and test set
train <- read.csv("~/titanic/train(2).csv")
test  <- read.csv("~/titanic/test.csv")
## Convert data into a simpler dataframe (just the Age and Fare
## columns, which are the ones cleaned up below)
train <- data.frame(Survived = train$Survived, Age = train$Age, Fare = train$Fare)
test  <- data.frame(Age = test$Age, Fare = test$Fare)
## now we need to get rid of the NAs and make them sensible defaults
train$Fare[is.na(train$Fare)] <- 0
train$Age[is.na(train$Age)]   <- 30
test$Fare[is.na(test$Fare)]   <- 0
test$Age[is.na(test$Age)]     <- 30
labels <- as.factor(train[,1])
train  <- train[,-1]
## fit a random forest and make a prediction on the test set
rf <- randomForest(train, labels, xtest=test, ntree=100, do.trace=TRUE)
predictions <- levels(labels)[rf$test$predicted]

As we do more work on the model and try other approaches, I will post updates on my blog.
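For completeness, the predictions can be written out as a Kaggle submission file. Here is a minimal sketch (not from the original script): it assumes the `predictions` vector produced above, and re-reads the raw test csv to recover the `PassengerId` column; the output file name is a placeholder.

```r
## Assemble a two-column submission data frame and write it to csv.
## Assumes `predictions` from the script above and the raw test set.
raw_test <- read.csv("~/titanic/test.csv")
submission <- data.frame(PassengerId = raw_test$PassengerId,
                         Survived = predictions)
write.csv(submission, "~/titanic/submission.csv", row.names = FALSE)
```

Kaggle scores the file on exactly those two columns, so nothing else from the test set needs to be carried along.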

Tuesday, June 10, 2014

Taco Bell's Waffle Taco a novelty that needs to go the way of the pet rock.

This morning I decided to try the Waffle Taco at Taco Bell. What a mistake! It was terrible: a tasteless frozen waffle with rubbery pseudo-food inside. Do not even bother trying this, because you will just regret it like I do. No wonder they use old people in their commercials for this item. Their taste buds are already shot.

Sunday, June 8, 2014

Reading a large number of files into R

I know this is a fairly basic topic, but it is one that caused me problems lately. Normally I only have to read in one data file at a time or I read in a few tables separately.

If I am reading in a single file I would do the following (the file names here are just placeholders):

> data <- read.table("~/mydata.txt", header=TRUE)

or, if it is online:

> data <- read.table("http://example.com/mydata.txt", header=TRUE)

If it is a csv file:

> data <- read.csv("~/mydata.csv")
Now the problem arose because I needed to read in 400 files from a directory, but the files were not numerically indexed. So to solve this problem I used the functions list.files and paste.

> names <- list.files("~directory")
> complete_names <- paste("~directory/", names, sep="")
> monitors <- data.frame()
> for (i in seq_along(names)) {
+     monitors <- rbind(monitors, read.csv(complete_names[i]))
+ }

It was slow, but got the job done.
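A faster variant (a sketch, not what I ran at the time): growing a data frame with rbind inside a loop re-copies it on every iteration, so reading all the files into a list first and binding once is much quicker. The directory path here is a placeholder.

```r
## Read every csv in a directory and bind the results in one step.
## list.files(full.names=TRUE) also removes the need for paste().
files <- list.files("~directory", pattern = "\\.csv$", full.names = TRUE)
monitors <- do.call(rbind, lapply(files, read.csv))
```

With 400 files this does one big allocation instead of 400 incremental ones.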