A Business Technology Place

Play it again.

Repetition in youth.

That song was awesome! Play it again. Rewind-stop-play-rewind-stop-play-forward-play, call the radio station to request it, press repeat. These actions are all part of my memories from my youth, back in the days when I spent a fair amount of my income on cassette tapes and CDs. It was that song. It was my favorite song. It was our special song. It represented something of value: a mood, a feeling, a lyric, or a sound. Whatever the case, when I found it, I wanted to hear it again. Play it again and again.

Repetition in business.

The years have changed. My daily routines have changed. My friends have changed. But one thing that remains is an attraction to repeat what works in business. Play it again! When a project or implementation goes well, we talk about “lessons learned”. Sure, we record the bad, but we also record what went right. Then we try to do it again. We want to replicate the secret formula for the same good results.

But software projects are all different. The requirements change. The people may change. The customer may change. Repeating success is not as easy as hitting rewind-stop-play. We could get different results even if we held all but one of those variables the same.

Magic in the interpretation.

Two musicians will play the same song differently. It’s their interpretation, their emphasis, and their feeling. Software programming can be the same way. Two different programmers will create distinctly different programs that accomplish the same goal. The differences may be visible in the UI or show up in the workflows and program speed. It’s all part of their interpretation and skill.

Programmers “play it again” when they establish repeatable processes and procedures. It’s the software development life cycle (SDLC). Adherence to process is encouraged, but the real magic happens when the programmer is allowed to interpret, feel, and create on the edges.

So yes, play it again in business and software development. But play with feeling. Play from the heart. Find that unique rhythm. Create a one-of-a-kind. Stop-rewind-play.

 

QR codes are dumb codes.

Bob and others I’ve worked with know I don’t like QR codes (for marketing, anyway). They may not always remember why I don’t like them. QR codes are a re-tooling of an ancient technology (2D bar codes) that was reborn only to fail because of the very trend it attempted to build on: smart phones. And they don’t really solve a problem for the user; rather, they create one: having to get a QR code reader.

They are also ugly blotches to incorporate into design.

Barcodes (whether one-dimensional or two-dimensional) are “machine-readable” codes designed to be read very quickly by very dumb machines. QR codes are a marketing fad that serves no purpose other than its own use.

Smart phones are not dumb. They are smart enough to recognize faces, text, and objects. Try out Google Goggles if you haven’t. I had one recognize Dwight Schrute’s head on a mug! Smart phones don’t need dumb print blotches.

I just redeemed an iTunes card using a feature I had not seen before: “Use your camera.” I clicked on it, held the card up to the camera, and was surprised by what happened. I thought it was going to read the barcode on the back. Instead, it found the human-readable text and read that.

iTunes uses camera to read $10 gift card code.

Two things to note:

1. Not directly related, but note how the camera view is backwards, which seems to be the default on computers and phones. I think this mirror view is just more comfortable because things move the way we have been trained to expect (by handheld mirrors).

2. The iTunes reader quickly found and boxed in the human-readable text code (not the bar code), interpreted it, and displayed it back to me with a reinforcing message. It was almost instantaneous. And it was friendly enough to show me the code frontwards (not backwards like the actual camera view).
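For the curious, here is a minimal sketch of the same text-reading idea using open-source OCR. This is not how Apple’s reader works; the filename, the code pattern, and the use of Tesseract (via the pytesseract and Pillow packages) are all assumptions for illustration.

    import re
    from PIL import Image
    import pytesseract  # requires the Tesseract OCR engine to be installed

    # "gift_card.jpg" stands in for a photo of the card taken with a phone camera.
    raw_text = pytesseract.image_to_string(Image.open("gift_card.jpg"))

    # Redemption codes are usually a fixed-length run of letters and digits, so pick
    # out anything that looks like one instead of trusting the raw OCR dump.
    candidates = re.findall(r"\b[A-Z0-9]{12,16}\b", raw_text.upper())
    print(candidates)

The point is simply that reading the human-readable code requires nothing exotic, which is exactly why the printed blotch feels unnecessary.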

That’s smart cod(ing).

Interactions over Processes (user story template)

The first value statement in the Agile Manifesto is “Individuals and interactions over processes and tools”. Mike McLaughlin, of Version One, questions the role the growing agile tool set plays and the influence it has on individuals and interactions. McLaughlin states that the capability of agile software tools adds scalability and efficiency, but that this does not remove the need for team members to interact with each other. His point is that we need to stick to the original intent of the manifesto language: focus on the interactions, not the process.

It pulls like gravity.
It is human nature to gravitate toward stricter process and away from individuals and interactions. Organizations want to be lean to contain costs. Organizations want processes that provide consistent and efficient deliverables. These forces pull organizations toward larger processes and push employees away from the importance of human interaction.

In my experience, the downside of a process is that it tends to become the goal of many who follow it. I’ve seen employees feel like they have accomplished their goal simply because they followed the 10-step process and populated the fields on a form. Usually, when people reach this point, they stop thinking about the people and interactions on the other side of the process.

Recently, a co-worker shared with me that she went to an in-network doctor for a procedure. The doctor chose to use a second medical provider for part of the services. When the insurance company received the claim, they said the second medical provider was out of network, so they charged a different amount for those services. They penalized the insured as if she had chosen which secondary provider the primary doctor used. The process in this case missed the mark of its original intent, and it took over a year of appeals to get the insurance company to reverse course.

A user story.
I recently wrote a project charter to frame up an upcoming project. I chose a traditional project charter template because the original request for the project work was vague enough to include a large number of possible outcomes. I wanted a document format that could help me clarify the project goals and better define the boundaries of the project. To compile the charter, I had to interview several business stakeholders to learn about program rules, pricing rules, and system constraints.

I used the project charter as a basis to then move into agile user stories. Now, I’ll be honest. I have not used the agile user story approach for requirements gathering before. I have created use cases in the past, and a user story follows a similar type of approach. But true to the Agile Manifesto, user stories put less emphasis on a grand template and more emphasis on a conversation. I like the user story approach because it is set up to get requirement information from a project stakeholder through everyday language: “As a [role], I want [goal], so that [benefit].” This type of approach encourages user interaction. It encourages natural dialogue to document requirements.
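To make that concrete, here is a small sketch of what a captured story could look like once the conversation is over. The story, the acceptance criteria, and the field names are hypothetical, not taken from the actual project.

    from dataclasses import dataclass, field

    @dataclass
    class UserStory:
        role: str      # "As a ..."
        want: str      # "... I want ..."
        so_that: str   # "... so that ..."
        acceptance_criteria: list[str] = field(default_factory=list)

        def __str__(self) -> str:
            return f"As a {self.role}, I want {self.want}, so that {self.so_that}."

    story = UserStory(
        role="program administrator",
        want="to see the pricing rule applied to each order",
        so_that="I can confirm the program rules were followed",
        acceptance_criteria=[
            "Each order line shows the pricing rule that was applied",
            "Orders that violate a program rule are flagged for review",
        ],
    )
    print(story)

The structure is there, but it stays out of the way of the sentence a stakeholder would actually say.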

The result of interactions.
At the end of the user stories, I had a document that showed how the people involved in the project would interact with and use a system, and what they expected to get from it. To get there I had to interview and interact. Dare I say that’s a good process to follow for software development? 😉

But seriously, I think it steers discussion towards the benefits of the project work and away from the fields in a form. There is a structure to the user story template. But the structure is almost invisible because it adheres to natural language. I wasn’t even aware of the form structure while I conversed with the business stakeholder to gather input. At the end of our discussion, I had a user story. I had an artifact that could be used to begin design and coding. I had experienced the value of the first value statement in the Agile Manifesto.

User Story and Acceptance Criteria


Email delivery isn’t guaranteed

A recent service incident reminded me that email message delivery is not guaranteed. In the incident, the sender was using email to deliver a specific 1-to-1 message. It was not a marketing campaign. Since the distribution list was large, they chose to send the emails in batch through a provider. The problem was that some of the recipients never received the message even though the email service provider (ESP) stats showed a 100% delivery rate.

Photo credit: Dimitrios Kaisaris


There are multiple factors that affect email delivery, and some of them are out of the sender’s control. The sender can control attributes such as the subject and message content, but they can’t completely control what happens after the email is handed to the ESP that accepts mail on behalf of the recipient. From that point, there are spam/bulk mail rules at the ESP, as well as personalized mailbox rules set by the recipient, that affect message delivery.
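A small sketch makes the hand-off point visible. The addresses and SMTP host below are placeholders; the thing to notice is that Python’s standard smtplib only tells you whether the next hop accepted the message, not whether it ever reached an inbox.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "notices@example.com"
    msg["To"] = "runner@example.org"
    msg["Subject"] = "Lottery results"
    msg.set_content("You're in! See peachtreeroadrace.org for details.")

    with smtplib.SMTP("smtp.example.com") as server:
        # sendmail() returns a dict of recipients the server refused; an empty dict
        # means every recipient was accepted by the relay -- the "100% delivery" an
        # ESP reports -- not that the message landed in anyone's inbox.
        refused = server.sendmail(msg["From"], [msg["To"]], msg.as_string())

    print("Accepted by relay" if not refused else f"Refused: {refused}")

Everything after that acceptance (spam scoring, bulk-mail rules, mailbox filters) happens out of the sender’s sight.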

By chance, I was registering for the Peachtree Road Race (10K), which uses a lottery for registration. I noticed the following statement in their explanation of how runners would be notified of the lottery results.

All individuals within the “Group” will be informed of their selection into the 2013 Peachtree by March 25 via email or through the searchable results on peachtreeroadrace.org and ajc.com/peachtree.

Email is efficient, fast, and low cost. But email delivery is not guaranteed. So when we use email to deliver specific B2C or B2B messages that require some type of acknowledgement, it’s a good idea to augment the message with another form of message delivery such as a postal mail piece or electronic posting on a web site.

2 Server Patching Approaches

Server patching is one of those infrastructure tasks that most business owners and infrastructure managers don’t link to the eCommerce portfolio. Operating system patches, virus scanner updates, database patches, and so on are all system-level software not directly related to the code line of the eCommerce team. Infrastructure managers can certainly create a patching strategy independent of the eCommerce development process, but I like two methods that keep the current version of the eCommerce code in the forefront, helping to minimize risk and make the patching process more efficient.

1. Follow code line promotions

The premise behind this approach is to install and promote server patches through the same sequence of non-production environments as the code line increments. So, for example, a shop might have an environment sequence for code something like this:

Development ⇒ QA ⇒ Beta ⇒ Production

Let’s say there is a code release, version 2.2, that is currently in development. Then, as version 2.2 is deployed in sequence to each environment, the infrastructure manager would additionally apply all new patches.
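As a rough sketch of the idea (the environment names, patch IDs, and helper functions below are placeholders, not a real deployment tool), the patch step simply rides along with each promotion:

    ENVIRONMENTS = ["Development", "QA", "Beta", "Production"]

    def promote(release: str, pending_patches: list[str]) -> None:
        # Deploy a release through each environment, applying pending patches alongside it.
        for env in ENVIRONMENTS:
            deploy_code(release, env)        # push the release to this environment
            for patch in pending_patches:
                apply_patch(patch, env)      # OS/database/AV patches ride along with the release
            run_regression_tests(env)        # one test pass now covers the code and the patches

    def deploy_code(release, env):           # placeholder for the real deployment tooling
        print(f"Deploying {release} to {env}")

    def apply_patch(patch, env):             # placeholder for the real patch tooling
        print(f"Applying {patch} in {env}")

    def run_regression_tests(env):           # placeholder hook into the test suite
        print(f"Running regression tests in {env}")

    promote("2.2", ["OS-2013-04", "DB-hotfix-17"])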

The advantage to this approach is that as the code line is tested in each environment, the patches are part of that testing. This prevents the team from having a duplicate testing event just for patch validation and should provide a more thorough set of testing before production deployment. Potential system incompatibility introduced by patches is always the biggest risk to production deployments, so this approach tries to mitigate that risk by having the patches tested by various groups in different environments.

A disadvantage to this approach is that the need for server patching may arise independently of the release schedule. Emergency vulnerability patches may be required due to a hacker exploit or some other industry event.

2. Snapshot and test

The idea of this approach is to make a copy of the environment that you will patch and then install the patches on the copied environment. The test team then validates that there are no code incompatibilities by testing against the new environment. Once approved, the infrastructure manager can promote the patched environment to production or patch the original production environment.
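Sketched in the same spirit (the clone, validate, and promote helpers are placeholder names, not a specific virtualization or cloud API), the flow looks like this:

    def snapshot_and_test(source_env: str, patches: list[str]) -> None:
        staging = clone_environment(source_env)   # copy of the environment to be patched
        for patch in patches:
            apply_patch(patch, staging)           # patches go on the copy only
        if run_validation_suite(staging):         # separate test event, independent of releases
            promote_to_production(staging)        # or re-apply the patches to the original
        else:
            print(f"Patches failed validation in {staging}; {source_env} left untouched")

    def clone_environment(env):                   # placeholder for real snapshot tooling
        print(f"Cloning {env}")
        return f"{env}-patched"

    def apply_patch(patch, env):                  # placeholder for the real patch tooling
        print(f"Applying {patch} in {env}")

    def run_validation_suite(env):                # placeholder hook into the test suite
        print(f"Validating {env}")
        return True

    def promote_to_production(env):
        print(f"Promoting {env} to production")

    snapshot_and_test("Production", ["OS-2013-04"])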

The advantage to this approach is that it can run at intervals independent of code releases. That could be useful if there is an emergency vulnerability patch and the next code release isn’t scheduled for production deployment for another four weeks.

The disadvantage to this approach is that it requires more administrative overhead and management. Depending on how it’s executed, you could duplicate tasks. Additionally, it creates a separate testing event for the validation team, independent of the normal test validation associated with code installs.

In practice, it may be best to use both approaches. Following code line promotions is a risk mitigation approach that doesn’t duplicate testing efforts and would work for a routine patch update schedule. The snapshot and test approach works for off-cycle patches that have a more urgent requirement for deployment.