A bit of insight here – lessons learned from past work experiences.
Part 3 is live.
In the final part of our series, we look at the characteristics of a data-centric culture and how to adopt such a culture in your own enterprise.
Part 2 is live.
“To become a data-centric enterprise, you must first recognize what counterproductive behaviors you’ll need to eliminate.”
How many times have you sat in a meeting and seen it go off the rails because the company gets distracted by “new and shiny” instead of taking care of fundamentals first?
I wrote this series on that exact problem. Part 1 published today.
This concept might sound familiar, as I’ve hinted at it in a previous post.
If you’re familiar with the expression, or perhaps have seen the eponymous film, you understand the idea of something with far less importance or weight driving a much bigger process. In the film’s case, the expression was used to characterize a completely fabricated war shifting attention away from an actual scandal. For our purposes here, consider it this way: a business purchasing its end-use BI tool before crafting the strategy behind what it wants and how it wants to use it.
It’s a tempting situation. Vendors do a very good job of promoting their business intelligence tools, and there’s nothing wrong with that. But a company can’t rely on that alone to solve the big questions. You wouldn’t buy a dishwasher and then build a house around it…so why rush to invest in a BI tool before you’ve determined exactly what you want out of it and what questions the business wants to answer?
This over-reliance on proprietary tools has, at least for me, encouraged a focus on open-source BI tools. My most common tools of choice are MySQL for relational databases, RStudio for ETL and analytics, Shiny for R-based deployable visualizations, Orange for GUI-based analytics, and Git for source control. There are other tools, to be sure, and the beauty of the open-source sphere is its constant evolution. Beyond that, you avoid sinking money into a proprietary solution that may be obsolete in a few years.
But more importantly–and where this fits into my point of wagging the dog–an open-source solution allows your company to pilot potential tools and solutions without the level of risk and investment a proprietary solution demands. I have seen companies invest plenty of money in proprietary solutions before they thought through the business process, then spend a tremendous amount of time and money trying to make that solution work for what they needed, even after they realized the tool was not right for them. They let the tail wag the dog.
Software is a tool, not a solution. Be sure you know what a tool needs to do for you before you choose it.
For further reading:
Being platform-agnostic, and not letting the tail wag the dog, is a critical part of Business Intelligence efforts. I’ve written on this before, and firmly believe that choosing a particular software solution should be done after the strategies and business cases for the BI efforts are crafted.
To that end, open-source software is a favorite. There’s a conference coming up near me in April. I’ve registered, and I hope to see you there.
In his book Everybody Lies, Seth Stephens-Davidowitz discusses the Doppelgänger Discovery method used most notably in baseball, in the case of slugger David Ortiz. Doppelgänger Discovery is a way to load up a model with as many data points about a person as possible and find their statistical twins. In the case of David Ortiz, it proved that he wasn’t quite out of his prime, based on the career arcs of other players just like him.
We are slightly modifying the scenario here. Let’s assume you are charged with selecting participants for a particularly difficult professional development program that requires a specific personality profile and resume for someone to truly get the most out of it. You have 3 spots open, and 3 idealized candidate profiles that represent those individuals who would be best suited to participate. There are 4 key factors to match on, and just sorting names in a spreadsheet doesn’t really cut it. As with most analytics scenarios, there’s an R package for that. There are several. I’ve used and prefer MatchIt.
First, get your data straight. In this case, we want a spreadsheet with our individual identifiers (names, Person X, or participant numbers), groups (control vs selection), and the factors to match on. Something like this:
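A made-up layout for illustration (the names and factor values here are hypothetical; the three idealized profiles sit in one group, coded 1, and the real candidate pool in the other, coded 0, since matchit expects a binary group indicator):

Name          Group  Factor1  Factor2  Factor3  Factor4
Ideal_1       1      4.2      3.1      2.5      4.8
Ideal_2       1      3.8      4.0      3.3      4.1
Ideal_3       1      4.5      2.9      4.0      3.7
Candidate_A   0      4.0      3.0      2.7      4.5
Candidate_B   0      2.1      4.2      3.9      3.2
Candidate_C   0      4.4      3.2      2.3      4.6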
Let’s figure out who would be our ideal candidates. First, install the MatchIt library via your package loader. Next, load your spreadsheet (assuming a CSV format) as a dataframe named matching.
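A minimal sketch of that step, assuming the spreadsheet was saved as matching.csv in your working directory (the file name is just a placeholder):

install.packages("MatchIt")            # one-time install
matching <- read.csv("matching.csv")   # load the spreadsheet into a data frame named matching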
The following script calls the MatchIt package and performs the matching:
# Call the library
library(MatchIt)

# Initialize
set.seed(1234)

# Run matching function; all 4 factors are equally weighted
match.it <- matchit(Group ~ Factor1 + Factor2 + Factor3 + Factor4, data = matching, method = "nearest", ratio = 1)
a <- summary(match.it)

# Put matched set in a new data frame
df.match <- match.data(match.it)[1:ncol(matching)]

# Plot the results
plot(match.it, type = 'jitter', interactive = FALSE)
Now, you have a data frame with the 3 prototypical candidates and the 3 chosen candidates. Keep in mind you do not have a 1:1 correspondence here, as these are nearest-neighbor matches. See the documentation for more information on alternate methods and exact matching.
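For example, a hedged sketch of exact matching on the same four factors (exact matching only pairs candidates whose factor values match an idealized profile exactly, so it can return fewer than 3 people):

match.exact <- matchit(Group ~ Factor1 + Factor2 + Factor3 + Factor4, data = matching, method = "exact")
summary(match.exact)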
Those who have been in any sort of sociological research field should be very familiar with the survey platforms available on the web today (e.g., SurveyMonkey, SurveyGizmo, or LimeSurvey). Getting your results usually involves a multi-step generate/export/import cycle. Is there a better way?
I asked the question when using R to digest a survey deployed on SurveyGizmo. With so many R packages out there, I had a hunch there was something to help me get my results from SG into R without having to run through the generate/export/import cycle. Enter RSurveyGizmo, a package that does exactly that.
Beyond aggregates and analytics, the survey results in SurveyGizmo should be stored elsewhere for future use. This raises more questions about ETL from the website itself to your database of choice. In this case, let’s assume we have a MySQL database running on Amazon AWS. I recommend this over an MSSQL instance because of the difficulty of using an ODBC connection on anything other than Windows (but it can be done).
- SurveyGizmo account with surveys already active
- MySQL database established on Amazon AWS
- You know your host, port, dbname, username, and password for your MySQL database on Amazon AWS
- R version 3.4.2
Part I: SurveyGizmo
- Log into your SurveyGizmo account and head over to your API access options. Find that under Account > Integrations > Manage API.
- If you don’t have an active API key listed, Create an API Key. You will then see the API key listed for your user account. Copy that key to a text editor, as you will need it momentarily.
- Go back to your SurveyGizmo home page and view the surveys you have out there. Choose one and click on it.
- You’ll be taken to the survey build page and the address will be something like https://app.surveygizmo.com/builder/build/id/xxxxxxx where xxxxxxx is a unique number. Copy that number to a text editor, as you will need it momentarily too.
Part II: R + SurveyGizmo
- Install RSurveyGizmo via devtools.
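Since the package is installed via devtools (typically from GitHub), a sketch might look like the following; the repository path shown here is an assumption, so check the package’s page for the current location:

install.packages("devtools")                        # if devtools isn't already installed
devtools::install_github("DerekYves/rsurveygizmo")  # repository path is an assumption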
- Construct the script to grab your survey. You will need the API key and survey number.
library(Rsurveygizmo)
api <- "your_api_key"
my.data <- pullsg(survey_number, api, completes_only=T)
- You will see loading progress and, depending on the size of your survey, will have a frame full of data in just a few moments. (Sometimes I get a JSON error, but it resolves itself in a few minutes.) SurveyGizmo does have API call limits, so please be judicious with how many times you do this. It’s generally good to run the process once you have enough data to start writing your analytics scripts, then again once the survey is closed.
- This is the simplest of the methods in the RSurveyGizmo package. You will want to explore the package documentation to learn all it can do for you.
Part III: R + MySQL
- Install the RMySQL package via your package loader.
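That install is a one-liner if you’d rather do it from the console:

install.packages("RMySQL")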
- Construct the script to establish your connection, filling in your specific details.
# load RMySQL
library(RMySQL)

# establish the MySQL connection
con <- dbConnect(RMySQL::MySQL(),
                 username = "user",
                 password = "password",
                 host = "name.something.zone.rds.amazonaws.com",
                 port = 3306,
                 dbname = "mydb")
- Now con will serve as your pipeline for the RMySQL calls.
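A quick way to confirm the connection works is to list the tables in the database:

dbListTables(con)   # returns the names of the tables in mydb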
- Two common functions are dbWriteTable and dbGetQuery. As you might expect, to write an R data frame to a table in your MySQL database, you use dbWriteTable:
dbWriteTable(con, "table_name", dataframe.name, overwrite=TRUE)
Using overwrite=TRUE means your table is essentially dropped and recreated, rather than appended.
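If you would rather add rows to an existing table than rebuild it, the same call can be made with append=TRUE (a sketch, reusing the placeholder names above):

dbWriteTable(con, "table_name", dataframe.name, append=TRUE, overwrite=FALSE)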
To get an existing MySQL table into a new R data frame, you’d use dbGetQuery, which sends the query and fetches the result in one step:
newframe <- dbGetQuery(con, "SELECT * FROM mydb.mytable")
- Here’s a wrinkle, though. SurveyGizmo downloads come with concatenated column names that may not be very helpful. I prefer to convert all my column names to a standard format and establish a reference table with all the original questions matched up. The following script grabs all the column names from an existing data frame and creates a table with a standard “qxxx” format matched to the original question name.
# get question text into a vector
Question_Text <- colnames(mydata.original)

# get length of that vector
sq <- length(Question_Text)

# generate a sequence of keys (q001, q002, ...) based on that length
QKey <- sprintf("q%03d", seq_len(sq))

# make a new data frame with the QKeys matched to the original question text
mydata.questions <- data.frame(QKey, Question_Text)

# replace the original question text with those keys
colnames(mydata.original) <- as.character(QKey)
Now you have two frames: mydata.original with uniform column names, and mydata.questions with those column names matched to the original text.
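As a quick usage example, the original wording of any question can be recovered from the reference frame by its key (here q001, the first question):

as.character(mydata.questions$Question_Text[mydata.questions$QKey == "q001"])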
Assuming you want to get those frames into your MySQL database, use the following:
dbWriteTable(con, "mydata_questions",mydata.questions, overwrite=TRUE) dbWriteTable(con, "mydata_original",mydata.original, overwrite=TRUE)
1. Install Homebrew via Terminal
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
2. Install unixODBC via Terminal
brew update
brew install unixodbc
3. Install freeTDS via Terminal
brew install freetds --with-unixodbc
4. Add a driver entry to odbcinst.ini
[MSSQL]
Description = Microsoft SQL Server driver
Driver = /usr/local/Cellar/freetds/0.95.18/lib/libtdsodbc.so
5. Add a server entry to freetds.conf
[MY_SQL_SERVER]
host = myazureserver.database.windows.net
port = 1433
tds version = 7.0
6. Add a DSN entry to odbc.ini
[myazureserver]
Driver=/usr/local/lib/libtdsodbc.so
Trace=No
Server=myazureserver.database.windows.net
Port=1433
TDS_Version=8.0
Database=myazuredatabase
7. Test the connection via Terminal
isql -v myazureserver user pass
8. Symlink the configuration files to your home directory
ln -vs /usr/local/Cellar/freetds/0.95.18/etc/freetds.conf ~/.freetds.conf
ln -vs /usr/local/Cellar/unixodbc/2.3.2_1/etc/odbc.ini ~/.odbc.ini
ln -vs /usr/local/Cellar/unixodbc/2.3.2_1/etc/odbcinst.ini ~/.odbcinst.ini
9. Connect from R via RODBC
# install RODBC package (can comment this out once run)
install.packages("RODBC", type = "source")

# call RODBC package
library(RODBC)

# create a connection
mycon <- odbcConnect("myazureserver", uid="user", pwd="pass")

# see what it looks like:
mycon

# Select the top 100 records from table dbo.Table and load into dataframe "rs"
rs <- sqlQuery(mycon, "SELECT TOP (100) * FROM dbo.Table")
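When you’re finished, it’s good practice to close the ODBC channel:

# close the connection when done
odbcClose(mycon)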
On February 26, 2008, Starbucks stores across the country closed for 3.5 hours for what CEO Howard Schultz characterized as “a reaffirmation of [their] coffee leadership.” An estimated $4-6 million in sales were lost, rival coffee stores offered promotions taking advantage of the competitor’s downtime, and reactions were wildly mixed. This was an incredibly bold move in the midst of the Great Recession. Just when consumers needed signs of confidence from their trusted brands, a staple goes dark? After expanding at breakneck speed, why was Starbucks stepping back?
I had very little insight into what went into this decision before reading Schultz’s book, Onward. In it, he explains how the drive to grow had overtaken the fundamentals of the company. In an environment of increasing demands at the front line, Starbucks had fallen into bad practices, even with the best of intentions. It became necessary to take a step back. It was time to refocus, retrain, and recommit to the Starbucks Experience. As Jon Picoult notes, Schultz did not view this as a cost—it was “a smart investment in the education of his employees.” Beyond that, it was a courageous move . . . one that ultimately worked out in the company’s favor.
In any business environment, the prospect of shutting down and stepping back from production to refocus on internal housekeeping seems contrary to conventional wisdom. It may be interpreted as a sign of weakness or lack of organization. But that same race for deliverables and production can introduce corner-cutting or ad-hoc fixes that are never meant to be sustainable. In Starbucks’ case, for example, baristas were pre-steaming milk for lattes and cappuccinos. This compromised the beverage.
Business intelligence efforts are particularly susceptible to the race for deliverables. Think of an analytics group bombarded with report requirements from different business units. Their revenue depends on the justification for these reports. In some cases, the source data may be given to them without sufficient explanation or rationale, and they are asked to make sense of it on the fly. Billable hours and the drive to “just get it done” take precedence. A cycle of short-term fixes emerges, and no clear ownership of the data is established.
This is a process driven by fear. Being a martyr to productivity is not only selfish, it is irresponsible. Starbucks recognized they were slinging an inferior product and chose to refocus. Howard Schultz had the courage to stand up and step back . . . and that’s just coffee. In the business domain, what critical data products get rushed out to production and are mediocre at best?
Someone must recognize that the vicious cycle is untenable. It might seem contrary to the pressures of billable hours and deliverables, but it is ultimately a smart investment in the sustainability of your processes. Taking the time up front to stop and get the house in order precludes the repeated short-term fixes that would inevitably snowball. It’s a courageous move amidst competing pressures.