
Applying agile principles to managing marketing programs: when adding one meeting kills others

For about a year, my current marketing team has followed a practice we call the “Daily Sync”, similar but not identical to the daily stand-up meeting in agile development. Coming from an engineering management background, I treated it as an experiment to see whether the benefits of stand-up meetings for development would transfer to a marketing team. I’m always willing to try something, learn from it, and drop the idea if it doesn’t work. In our case the daily meetings have gone well and resulted in fewer meetings overall. I do ask the team if the meetings are useful, and they say they are … but since I’m the “boss” I sometimes worry that people say so just because I’m the one asking. Since the meetings tend to be pretty dynamic and everybody participates, I’m assuming they’re serving their purpose for now.

Our team is split across four cities in the US, so we use video conferencing for our meetings (Skype, Google Hangouts, or WebEx would all do the trick), scheduled for 30 minutes. Some days we go over what each person is doing; other days we focus on one area, and people raise issues and ideas by exception.

This daily sync has ultimately sped up our decision making, eliminated per-project weekly meetings, and simply connected our distributed team better. On a video conference, body language and visual cues are obvious, and the ability to quickly collaborate on documents to knock out edits or ideas is great. I’m on plenty of audio conferences on a daily basis too, and they just aren’t the same (of course a video conference isn’t always practical, but tools like Google Hangouts let you call in one participant by phone while the others are on video).

If you’re a marketing team working on multiple programs (remote or at the same office), try it out: a quick 30-minute daily sync. See if you can also get rid of all the detailed project review meetings and eliminate those never-ending e-mail threads 🙂

Strategy to execution, lessons learned and mistakes along the way

On the recommendation of a colleague I recently read The Lean Startup by Eric Ries (Mark Mitchell wrote a review of the book if you’re interested). It got me thinking about many projects I’ve worked on: launching online communities at National Instruments, a new FPGA-based software defined radio (SDR) tool, cloud-based development environments, and cloud-based services for IoT devices. The online communities, with many follow-on iterations and improvements, have proved extremely successful, while the others have more proving to do.

Even though these projects all shipped, I think they could have been more effective and executed more efficiently, with less wasted time and resources. In hindsight, my teams and I would have been better off being more systematic: combining some of the points made in The Lean Startup with a framework like the Strategy Diamond by Donald C. Hambrick and James W. Fredrickson to define our vision and fundamental assumptions.

Adapted from Hambrick, D. C., & Fredrickson, J. W. (2005). Are you sure you have a strategy? Academy of Management Executive, 19(4), 51–62. (Source: http://2012books.lardbucket.org/books/management-principles-v1.0/s09-06-formulating-organizational-and.html)

In the case of the LabVIEW DSP Design Module, targeted at FPGA synthesis for SDR applications, we successfully achieved real-time LTE uplinks and downlinks with a high-level graphical development and design capture tool. There were many lessons learned, but one early turning point was putting the tool in front of real communications engineers. Their feedback resulted in significant changes to the graphical model for design capture, and it helped us define what a minimum viable product really needed to include (quality of results, number of MIMO channels, wireless standards to support) before we could expose the tool to more people. You can see a demo in this video.

In other projects, ironically some of my cloud-based research projects that lend themselves to broader exposure and experimentation, we did more internal thinking and definition without validating key needs with prospects as much as we could have. That’s most likely because the “cloud” was so different from the standard products we were used to, which, now that I think about it, should have had us talking to real-world prospects even sooner!

Taking an idea from concept and vision to implementation, and then iterating on it, is a real challenge whatever the market and application. In today’s fast-paced, dynamic environment, most of us would be better off articulating that vision and our assumptions, and doing what we can to validate them with real customers and prospects. It’s always tempting to wait and deliver what “we” think is the ideal solution, but that delay and lack of input increases the risk that we’ll miss the mark on functionality and time to market.

Process … what is it good for?

NI has a well-defined development process for releasing software and hardware products. Developers are exposed to some parts of it and not others. For example, most developers don’t really see the pre-release and product planning meetings that serve as checkpoints as a product goes through different stages and becomes ready to manufacture (CDs or DVDs in the case of software, as well as web downloads).

Today in our team meeting we discussed the iterative process we’ve been using. I wrote a little on this topic a few weeks ago in the context of the LabVIEW Project feature released in LabVIEW 8.0.

Note: By team I’m not referring to the entire LabVIEW group. I’m referring to a specific team working on features focused on improving how people deploy their distributed applications.

The current development process is “based on” Scrum, which is a flavor of agile. We’re pragmatic in our approach to development and look at any process with an eye toward how it works best for us. One of the developers on the team tried to convey that some process really is necessary, even though the initial reaction we sometimes get when we hear the “P” word is borderline hatred. A free-for-all doesn’t work, and he said something along the lines of: think of process as “the difference between doing whatever you want and making rules that we all agree are useful to follow”. I thought that was a nice way of putting it. Now, one of my roles is that of a project manager, so some level of process is needed just to keep me sane, but the intent and goal of any process is always the most important thing. If those goals aren’t being met, the process isn’t working (for whatever reason).

One of the reasons we’re using Scrum and an iterative development approach is that some elements of what we’re exploring aren’t well defined from a requirements or design standpoint and require exploration. We could go off and write a detailed spec after a number of meetings, then create a design, implement it, test it, and release. Most people would say you really go through this cycle a few times for many features. What we’ve been doing is a little different from writing a detailed specification up front.

We’ve defined a number of high-level areas and functional requirements. For each iteration we (really, our program manager “Jen”) select the set of items from our list we want to tackle. Here’s a PowerPoint rendering of what was whiteboarded in real time to describe this during our meeting for some people at a remote office.

[Image: Scrum iteration diagram]
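To make the selection step concrete, here is a minimal sketch of how picking an iteration’s work from a prioritized list might look in code. This is purely illustrative: the item names, point costs, and capacity number are invented, not how our team actually tracks its list.

```python
# Hypothetical sketch of per-iteration backlog selection: walk a
# priority-ordered list and take items until capacity runs out.

def select_iteration(backlog, capacity):
    """Pick (name, cost) items in priority order until capacity is exhausted."""
    selected = []
    remaining = capacity
    for name, cost in backlog:  # backlog is already priority-ordered
        if cost <= remaining:
            selected.append(name)
            remaining -= cost
    return selected

# Invented example items and costs for illustration only.
backlog = [
    ("deploy-to-target demo script", 5),
    ("remote debugging spike", 3),
    ("installer integration", 4),
    ("error reporting design", 2),
]

print(select_iteration(backlog, 8))
# → ['deploy-to-target demo script', 'remote debugging spike']
```

The point of the sketch is simply that selection is constrained by iteration capacity, so lower-priority items slip to a later iteration rather than stretching the current one.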

For early iterations we’ve created what you could call demo scripts based on primary use cases. These are vertical slices that require integration between the different system components to work correctly. The output of those iterations has been a greater understanding of our needs, integration and validation of the system components, and code that is more or less “prototype quality”.

After an iteration, areas we feel are now understood are treated differently. These get detailed specifications and designs, and the implementation is expected not to be “prototype quality” but to approach an alpha, or feature-complete, stage. For areas that are still unresolved, we continue in prototyping mode, since we need more work to get rid of that “not quite wrapped our heads around this” feeling.

As we get further into development we have a mix of subsystems in different states: some are in what I’ll call “prototype and exploration” and others are in “defined and implementing”. The mix has changed over time and continues to change, weighting more heavily toward “defined and implementing” as we progress.

I’m a fan of this type of iterative approach because it recognizes that any project has unknowns and risks, and it puts in place a process to uncover them in an ongoing fashion. I’ll try to have some more follow-up posts on this topic, since it has been an area of discussion in my team lately.