Integrating Agile and UX Methodology

Both Agile development and User Experience (UX) design methodologies have gained popularity in software design and project management circles in recent years, yet despite that popularity, there has been little effort to integrate them. Each approaches development from a different perspective: Agile focuses on coding and project management, while UX is concerned with the usability of the product and the actual user interface. Of course, these are two sides of the same coin. A software solution is unusable without some form of user interface, and a user interface is but an empty shell without a quality software product behind it.

Ferreira, Sharp, & Robinson (2011) suggest a framework for Agile and UX integration, founded on five principles:

  1. The user should be involved in the development process; 
  2. Designers and developers must be willing to communicate and work together extremely closely; 
  3. Designers must be willing to feed developers with prototypes and user feedback; 
  4. UCD practitioners must be given ample time to discover users’ basic needs before any code is written; and,
  5. Agile/UCD integration must exist within a cohesive project management framework.

The framework itself has the software developers and UX designers running in parallel iterations, giving and receiving feedback in each iteration, with the UX team working one Sprint ahead of the development team. Both start with a Sprint 0 to obtain context and a task analysis for the project ahead. This ultimately generates User Stories, which are then distributed across the Sprints. Each Story first goes through the UX team before being delivered to the development team, so that the UX designers can work from the User Stories and intended outcomes to produce the interface. The authors observed both design and development teams in action and noted many areas for integration and improvement. Most notably, they found that UX designers had no User Stories specific to them and found it difficult to design one Sprint ahead; usually, they were either on the same Sprint as the developers or one behind.

The authors present a solid argument for integrating Agile and UX methodologies, and since the article’s publication in 2011, the idea has caught on (e.g., Gothelf, 2018). A variety of Agile- and UX-flavored methods circulate in the DevOps world at any given moment, each with dedicated followers and applications.


Ferreira, J., Sharp, H., & Robinson, H. (2011). User experience design and agile development: managing cooperation through articulation work. Software: Practice and Experience, 41(9), 963-974. doi:10.1002/spe.1012

Gothelf, J. (2018). Here is how UX design integrates with Agile and Scrum. Retrieved from

MetroMaps and T-Cubes: Beyond Gantt Charts

Martínez, Dolado, & Presedo (2010) discuss two visual modeling tools for software development and planning, MetroMap and T-Cube. This discussion is in the context of greater attention being paid to the development process and metrics, not just the software engineering itself. A concession the authors make very early on is that Gantt charts are the prevalent method for project mapping in organizations, and that the research to date shows they are not effective for communicating, especially when different groups are involved. Enter the MetroMap, a way of visualizing abstract train-of-thought information that communicates both high-level and detailed information to viewers.

Image courtesy of Martínez, Dolado, & Presedo (2010)

T-Cube visualization is reminiscent of a Rubik’s Cube, utilizing the three-dimensional nature of a physical cube, the individual cubes making up the whole, and the facets (colors) on each individual cube. These correspond to tasks and attributes. The authors utilized a specific software set to illustrate these concepts, represented in the article. As the tasks and attributes are written independently, they can be represented by workgroup, type of task, module or time.

These two methods have their strengths and weaknesses, both individually and together. At first glance, it is obvious that the MetroMap can represent many indicators at once while the T-Cube can only show one at a time. MetroMap uses a variety of icons and styles to represent information while the T-Cube uses traditional treemaps. The authors size up the tools in a simple comparison table, noting that MetroMap generally has the edge on viewing a lot of information at once.

Features and benefits are great, but how does actual use differ? Is one easier than the other in practice? The authors examined the shortest-path route to accomplishing the same task in both tools, and found MetroMap to be the more efficient in multiple scenarios; in all cases its actions were more basic and straightforward. Overall, either tool is more informative and effective than a Gantt chart. Access to information and the ability to understand it are paramount in any planning and development exercise. These are two tools that better enable that.


Martínez, A., Dolado, J., & Presedo, C. (2010). Software Project Visualization Using Task Oriented Metaphors. JSEA, 3, 1015-1026.

Decision making with Delphi

The Delphi method brings subject matter experts with a range of experiences together in multiple rounds of questioning to arrive at the strongest consensus possible on a topic or series of topics (Okoli & Pawlowski, 2004; Pulat, 2014). The first round, conducted by questionnaire, is typically used to generate the ideas that subsequent rounds weight and prioritize; it is the most qualitative of the steps, while subsequent rounds are more quantitative. According to Pulat (2014), ideas are listed and prioritized by a weighted point system with no communication between the subject matter experts, which is meant to avoid confrontation (Dalkey & Helmer, 1963). Results, data requested by one or more experts, and any new information an expert considers potentially relevant can be shown to all experts between rounds (Dalkey & Helmer, 1963; Pulat, 2014). 
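
The weighted point system lends itself to a short sketch. The function below is illustrative only; the ideas, ballots, and point scale are hypothetical and not drawn from the sources. Each expert scores the round’s ideas anonymously, and a facilitator tallies and ranks the totals before feeding the results back for the next round.

```python
from collections import defaultdict

def delphi_round_scores(ballots):
    """Aggregate one quantitative Delphi round.

    ballots: list of dicts mapping idea -> points awarded by one expert.
    Returns ideas ranked by total points, highest first.
    """
    totals = defaultdict(int)
    for ballot in ballots:
        for idea, points in ballot.items():
            totals[idea] += points
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Three experts score three ideas anonymously (no inter-expert communication).
ballots = [
    {"automate testing": 5, "hire QA": 3, "refactor core": 1},
    {"automate testing": 4, "hire QA": 1, "refactor core": 3},
    {"automate testing": 3, "hire QA": 2, "refactor core": 4},
]
ranking = delphi_round_scores(ballots)
# "automate testing" leads with 12 points; the full ranking would be
# shared with all experts before the next round of scoring.
```

In a real Delphi exercise the facilitator would also circulate any supporting data an expert requested, per Dalkey and Helmer’s design.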

While Delphi begins with and retains a sense of qualitative research about it, traditional forecasting relies mostly on quantitative methods, using mathematical formulations and extrapolations as its mechanical basis (Wade, 2012). Using past behavior as a predictor of future positioning, a most likely scenario is extrapolated (Wade, 2012; Wade, 2014). This scenario modeling confines planning to a formulaic process much like regression modeling. Both Delphi and traditional forecasting use quantitative methods; the difference is to what degree. A key question in deciding which method to use is what personalities are involved: Delphi gives the most consideration to big personalities and potentially fragile egos, avoiding any direct confrontation or disagreement.


Dalkey, N., & Helmer, O. (1963). An experimental application of the Delphi method to the use of experts. Management Science, 9(3), 458-467.

Okoli, C., & Pawlowski, S. D. (2004). The Delphi method as a research tool: an example, design considerations and applications. Information & Management, 42(1), 15-29.

Pulat, B. (2014). Lean/six sigma black belt certification workshop: Body of knowledge. Creative Insights, LLC.

Wade, W. (2012). Scenario planning: A field guide to the future. John Wiley & Sons P&T. VitalSource Bookshelf Online.

Five thousand days of the World Wide Web

In 2007, Kevin Kelly looked back on the last 5,000 days of the World Wide Web and asked: what’s to come? Now, with years of hindsight since that talk, we ask: what next?

One thing I have to call attention to here is the latter part of the talk, in which Kelly discusses codependency and the exchange of privacy for convenience. Total personalization equals total transparency. From a development and data perspective, nothing is outlandish about that statement. But as we have seen in the social fabric over the last few years, not everyone understands or agrees with that logic. There is a demand for personalization without the transparency. I believe the watershed moment in that space will be a split between those who eschew all personalization in order to maintain privacy, and those who are determined to innovate a way to have both personalization and privacy to the degree we expect now.

That is not my prediction for an innovation in the next two decades. For that, think back to 2012, when Google Glass was first introduced to the public. It was a product ahead of its time and failed to gain traction. Less than ten years later, Google is refining the product for a more sophisticated release and targeted audiences are paying attention. Looking ahead to 2030 and beyond, augmented reality products will be as commonplace as the personal vital signs wearable (Apple Watch) or natural language processor in the living room (Amazon Alexa). Forces working in their favor are both tangible and intangible. Augmented reality is already here, most notably in current iPhone models. This has introduced the concept in an incremental and friendly way in an existing device as opposed to a bombshell new product class. Consumers are able to experience the tangible technology on devices they are already familiar with, gain confidence, and accept the new products that push the envelope. These are a mix of technological, cultural, and social forces.

These same forces can work against adoption. The development of augmented reality now centers on headsets and devices with cameras, but what of the technologies that can project fully functional desktops and workstations into thin air, to be touched and manipulated as though they were physically there? The interface running Tony Stark’s lab in Iron Man is not run through Google Glass; it is simply there. Assuming these can be built, take my earlier point about transparency and privacy, and apply it to technologies that, by definition, augment the very reality we function in. If people are uncomfortable now with the personalization/transparency tradeoff, a new device that alters how they see and interact with the world might simply be a bridge too far.


Diamandis, P. H. (2019). Augmented 2030: the apps, headsets, and lenses getting us there. Retrieved from

Kelly, K. (2007). The next 5,000 days of the web. Retrieved from

When Collaborative Learning isn’t an Open-Office Plan for Kids

According to Adams Becker et al. (2017, p. 20), “the advent of educational technology is spurring more collaborative learning opportunities,” driving innovation in a symbiotic relationship in which each pushes the other’s development. Collaborative learning and the technological developments that help drive it are trends, but not fads.

The confluence of collaborative learning and educational technology.

It may be easy to draw parallels between collaborative learning and open-plan offices. This corporate architecture fad of recent years does appear to be in the same spirit as collaborative learning, and the technological tools used in conjunction with open-plan offices (or in spite of them) do have a supporting relationship. But the similarities stop there. Open-plan offices had the best intentions of creating more face-to-face interaction among colleagues, but they have been shown to reduce such interaction by a drastic margin, pushing employees toward alternative text-based methods of communication under the social pressure to “look busy” amongst coworkers (James, 2019).

There is one carry-over from corporate collaboration that is fruitful in collaborative learning spaces: synchronous communication via messaging apps such as Slack. These interactions are purposeful and augment the authentic active learning students engage in with collaborative learning. Just as workers do not operate in silos within an organization, students are encouraged to engage with others in various collaborative methods. Educational research and practice reinforce these lessons learned from the corporate world, and are helpful forces driving innovation and advancement in collaborative learning practice and technology.


Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Hall Giesinger, C., & Ananthanarayanan, V. (2017). NMC Horizon report: 2017 higher education edition. Austin, TX: T. N. M. Consortium.

James, G. (2019). Open-plan offices literally make you stupid, according to Harvard.  Retrieved from

Variables and Measures, or People and Goals?

Just as any IT implementation shouldn’t be for its own sake—that is, it should serve a business purpose within the sponsoring organization and not simply be a cost center—quantitative analysis within the context of an organization should likewise serve a business purpose. For example, there must be some reason a widget manufacturer commissions a study of its customer base. It wasn’t brought up just to keep the research division busy. There are typically research questions and hypotheses that exist and guide the methodology.

In my own research consulting work, I have often started with broad research questions that then drive more narrow research questions and/or particular segment analyses. At the analysis level, the variables and desired outcomes are examined in order to determine what test to use. From that point, it is easy to get lost in the vocabulary of quantitative analysis and forget that the work is being done to answer a business question.


For example, assuming the National Widget Company commissioned that study of its customer base, I could simply report the measures of central tendency and leave them to interpret why there’s a difference between the mean and median ages. But a true data scientist/analyst helps explain why the numbers mean what they do, and ensures the business users don’t get lost in the lingo. I would take the time to explain that the mean age is 42.5, the median age is 37, and that the difference indicates the distribution is skewed toward older customers, possibly with some outliers pulling the mean up. I would then turn back to them and ask what this means for their business. Remember that as analysts, we are not the business subject-matter experts. Offering the numbers to the business and asking them to provide context creates more opportunities for synergy.
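
To make that mean/median gap concrete, here is a minimal sketch using Python’s statistics module. The ages are synthetic, constructed to reproduce the figures above; the National Widget Company and its customers are, of course, hypothetical.

```python
from statistics import mean, median

# Synthetic customer ages: a few much older customers pull the
# mean (42.5) well above the median (37), signaling right skew.
ages = [25, 29, 33, 35, 36, 38, 40, 45, 64, 80]

print(mean(ages))    # 42.5
print(median(ages))  # 37.0
```

When the mean sits well above the median like this, the business conversation can move straight to the question that matters: who are those older customers, and why are they buying?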

Consider another example involving correlation. Two variables, or points of interest as we would call them: widget sales and distance from a major airport. A strong negative correlation (r=-0.49) is found. First we must caution against equating correlation and causation. We would then pivot away from the r-value and put the focus back on the variables of interest: it appears that an individual who lives closer to a major airport is more likely to buy these widgets. Again, we would put the question back on the business to then have a conversation about why these variables might be related and the possible covariates.
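
That correlation conversation can be sketched in a few lines by computing Pearson’s r from first principles. The distances, sales figures, and resulting r below are invented for illustration (and come out stronger than the r = -0.49 in the example):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: distance from a major airport (miles) vs. widgets sold.
distance = [2, 5, 8, 12, 15, 20, 25, 30]
sales    = [40, 38, 35, 30, 33, 22, 18, 15]

r = pearson_r(distance, sales)
# r is strongly negative: customers closer to an airport buy more widgets
# in this invented sample, which is the conversation starter, not the answer.
```

In practice numpy.corrcoef or scipy.stats.pearsonr would do the same job; writing it out keeps the focus on what the covariance and spread terms actually measure.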

In either case, and in any analytics situation, proper use of visualization is paramount. In the latter example it is much easier to see what a high r-value means on a scatterplot as opposed to explaining it verbally. Data visualization bridges many gaps that numbers and words simply cannot fill. These are the languages of dashboards, executive roll-ups, and KPIs.

Overall, the primary thing to remember in keeping an audience engaged in a discussion around quantitative research is this: the variables of interest are the reason for the study, not the numbers themselves. Keep the focus on what matters.

What Makes Big Data “Big?”

I’ve never been a fan of buzzwords. The latest source of my discomfort is the term thought leader, which is one of those ubiquitous but necessary phrases in almost every professional space. That hasn’t kept me from poking fun at it, though, as I believe we should be able to laugh at ourselves and not take things too seriously.


Big Data is a buzzword. But it’s also my career.

What is the difference between regular, conventional, garden-variety data and Big Data? There’s a lot we could say here, but the key differences that come to mind for me are use, size, scope, and storage. I immediately think of two specific datasets I’ve used for teaching purposes: LendingClub and Stattleship.

LendingClub posts their loan history (anonymized, of course) for public consumption so that any audience may feed it into an engine or tool of their choice for analysis. I’ve used this dataset before to demonstrate predictive modeling and how financial institutions use it to aid decision-making in loan approvals. Stattleship is a sports data service with an API that allows access to a myriad of major league sports data. They also provide a custom wrapper to be used in R, and I’ve used these tools to teach R.

One of the primary differences between big data and conventional data is use case. Take these two datasets, for example. The architects of these sets understand that a variety of users will be downloading the data for various reasons, and there is no specific use case intended for either set. The possibilities are endless. With smaller troves of data, we typically have an intended use attached, and the data is specific to that use. Not so with big data.

These datasets illustrate two other key factors in big data: size and scope. Again, the datasets are not at all meant to answer one specific question or have a narrow focus. Sizing is often at least in gigabytes or terabytes—and in many cases tipping over into petabytes. The freedom to explore multiple lines of inquiry is inherent in big data sets without any sort of restriction on scope.

Finally, the storage and maintenance of big data is another key difference that sets it apart from conventional datasets. The trend of moving database operations offsite and using Database-as-a-Service models has enabled the growth of big data, as has the development of distributed computing and storage. Smaller conventional datasets do not require such an infrastructure and are not quite as impactful on a company’s bottom line.

Please, Stop Calling it a Hack

Have you used a lifehack? How exactly does one hack life? Chances are, you’re not a hacker. Using a binder clip in new and mind-blowing ways does not bestow upon you a title held by the likes of Kevin Mitnick and Sandra Bullock’s character in The Net. You just used a trick, a tip, or one bullet of listicle clickbait to do something in a different way.

Ushered in by the wildly popular Lifehacker blog (which I readily admit to reading), the term hack has come to replace a variety of words meaning tip. Perhaps it’s a desire to be hipster and ironic, or to frame everything in terms of technology, or perhaps, as Nikil Saval of the Pacific Standard put it in 2014, the “cult of self-optimization”:

Life-hacking wouldn’t be popular if it didn’t tap into something deeply corroded about the way work has, without much resistance, managed to invade every corner of our lives. The idea started out as a somewhat earnest response to the problem of fragmented attention and overwork—an attempt to reclaim some leisure time and autonomy from the demands of boundaryless labor. But it has since become just another hectoring paradigm of self-improvement.

To be sure, the underlying rationale for a “hack” is productivity, and even the cupcake-eating hack is about eating smarter, not harder (and maximizing the amount of cupcake you can get in your mouth with the least amount of mess). Yes, leave it to the lifehackers to turn something as innocent and joyous as eating a cupcake into an exercise measured in input, output, and waste.

When we move from tips and tricks to hacks, we introduce the assumption of “you’re doing it wrong.” Think of every single one of these lifehack lists as the annoying IT guy in your office who makes you feel incredibly stupid when you ask a simple technology question. I’ve been eating cupcakes for over 30 years and I don’t find anything particularly wrong with how it’s done. I know the different keys on my keyring without painting them in nail polish. I put a straw through the inverted tab of a soda can when I was a teenager, well before any clickbait list instructed me to.

So my quarrel is with both the word and the assumption. Calling something a hack doesn’t make it any more useful or chic than it was when it was a tip or a trick; in fact, it’s the etymological equivalent of a hipster flannel shirt and scarf. Likewise, it carries the pretentious assumption that it is inherently better while at the same time being fashionable before it was cool – think of George Costanza indignantly eating a Snickers bar with a knife and fork. Hacks are for the computer security world. Outside of that realm, it’s only short for hackneyed, and it most certainly is.

Finally, I’ll leave it to the folks at to put a slightly more blunt spin on this.

My Response to “Learn Your History”

A Confederate flag is displayed at the South Carolina state capitol in Columbia January 9, 2008. Many U.S. presidential campaigns shift their focus to South Carolina today for their first test in the south; the historic flag, which until 2000 flew from the capitol dome, is for some a symbol of the state’s political and racial divisions. REUTERS/Jonathan Ernst

As of today, our governor has called for the Confederate flag to come down from the monument on the State House grounds.

For something so divisive, you’d think there would be celebration.

But amidst the almost-universal praise, there is a strain of grumbling that goes something like this:
Learn your history! It’s about heritage, not racism!

History is a tricky subject. We can think of it as an objective account of our past, or we can acknowledge that it is a subjective mish-mash of narratives that managed to win out over other narratives. To be sure, a history of Native Americans in the United States would read differently written from the Native American perspective than from the perspective of those who drove them out of their lands. Curriculum wars have been waged over how certain history is told in school textbooks. If history were objective, we wouldn’t have these issues.

So when I hear “Learn your history,” I ask: Whose history? I’m a white male Southerner who spent a lot of time in past years studying how the Confederacy came to be, because I wanted to find out which was true: heritage or hate? I love flag design and owned a few Civil War era flags of South Carolina. I read Calhoun and Davis. I have been heavily involved in my own local history initiatives. I studied other instances of secession in world history. I could even see some of the constitutional rationale for secession, on paper and in a vacuum, so to speak. None of that could erase the fact that, despite the causes we may assign to Southern secession and the lingering Confederacy, there were clearly racist motives amidst the non-racist ones in the founding of the CSA. To that end, the flags represent it.

Flags are about active causes. It’s one thing to have a monument to the Confederate dead, but flying a battle flag in their memory doesn’t do anything other than imply we still support that cause today. It’s safe to say those men died for their country (the CSA, at the time), and I would imagine their idea of patriotism today would bind them to the same United States flag that we fly as Americans. (That brings up another point: flying those two flags at the same time seems completely illogical to me, but that’s another post.)

When we–white, southern, reasonably comfortable, and only connected to the Civil War by memory or distant relatives who were fighting to keep their economic interests alive–say “learn your history,” we are actually saying “learn the history that makes this flag okay.” The problem is, it’s not that simple. History is checkered. History is subjective. You cannot tell me that a descendant of a slave in South Carolina has the same perspective on history that we do. When we say “learn your history,” we are once again imposing our will over those who don’t have the luxury of a Gone With The Wind recollection of history.

If you can put together an entire nation, or state, or county of people who think that battle flag represents them, then secede again and fly it. But as it stands, state government represents all state residents, and enough of them don’t share the same history you do of that symbol. What’s wrong with being a good neighbor and letting it go?

A Quick Word on SC’s Confederate Flag

A Confederate flag is displayed at the South Carolina state capitol in Columbia January 9, 2008. Many U.S. presidential campaigns shift their focus to South Carolina today for their first test in the south; the historic flag, which until 2000 flew from the capitol dome, is for some a symbol of the state’s political and racial divisions. REUTERS/Jonathan Ernst

I am a SC native and resident. I remember every gubernatorial election and debate that ran up to the removal of the flag from the Statehouse dome in 2000. I also enjoy vexillology and history. Right now, the national conversation about the shootings in Charleston (the deeds of a racist madman) has renewed debate on removing the flag altogether from the Statehouse grounds (in my mind, a wise decision). Looking past the value judgments, let’s examine some frequently asked questions.

Q: How did it get there?

A: In 1962, the all-white legislature voted to place the flag atop the dome in what was considered an oppositional gesture against the Civil Rights Movement. The official reason given was the centennial of the Civil War, but that anniversary would have been 1961, so it’s anyone’s guess. It remained there until 2000.

Q: Why is it flying over a monument now?

A: What flies over the Confederate monument on the Statehouse grounds is a slightly different flag. What was above the dome was the Confederate Naval Jack; what flies over the monument is the Army of Northern Virginia Battle Flag, designed by William Porcher Miles. In 2000, the State Legislature passed the South Carolina Heritage Act, which effectively removed the flag from atop the dome and placed a different flag at the monument.

Q: Why wasn’t that flag placed at half-staff when the others were?

A: Logistics and symbolism. Logistically, the flagpole at the monument doesn’t adjust. So it’s either all the way up or all the way down. Symbolically, that flag is not representative of a sovereign entity. Flags at half-staff are usually either federal, state, or local. You may see private homes that fly their decorative flags at half-staff, but insofar as flag code goes, only sovereign flags are of consequence. The flag at the monument is a memorial flag and represents no sovereign entity, so it doesn’t count.

Q: Okay, so it’s a historical flag. Why isn’t it in a museum?

A: The Heritage Act requires a vote of the Legislature before any action is taken on the monument and flag. As of this writing, there is some support in the Legislature to take it down. We will see how that pans out. (Editorial: Why opponents of removing the flag believe that taking it down will somehow dishonor the memory of the dead is beyond me. No one has asked to tear up the monument. It’s only a question of removing a banner that is causing a lot of trouble and heartache. Seems to me that we would better honor past and present by compromising.)