Monthly Archives: April 2010

Windows Azure Online Workshop

Microsoft is offering a free, two-hour online workshop for developing apps for Windows Azure.  The course includes a full-access Azure account for two weeks.  It seems like a good opportunity to learn how to develop apps for Microsoft’s cloud platform.

If you are in the Boston area, there is also a one-day event on May 8th at Microsoft NERD called Boston Azure Firestarter.  The event will feature classes and labs on developing for the Azure platform.

You will need to have Visual Studio 2008 or Visual Studio 2010 installed, as well as the Azure Tools for Visual Studio.


Startup Lessons Learned Conference

Yesterday was the Startup Lessons Learned Conference in San Francisco.  Although I heard it was a beautiful day in SF, I enjoyed the conference from Cambridge, compliments of the Lean Startup Circle – Boston.  It was a great day: I met a lot of interesting people in Cambridge and heard from a lot of great speakers via the simulcast.

This was my first time at a meetup of the Lean Startup Circle in Boston, and I really enjoyed it.  The group may gravitate a bit toward the entrepreneurial side, but that is not unexpected.  I would still like to see a broader spectrum of people and industries embracing the ideas of lean/agile, but it is a movement and that will take some time.  Thanks again to Matthew Mamet and John Prendergast for putting the event together.

Keynote – Eric Ries

Eric Ries kicked things off with the keynote.  The takeaway for me here was the definition of a startup:

A human institution creating a new product or service under conditions of extreme uncertainty.

It has nothing to do with the size of a company, what industry it is in, how it is funded, etc.  We need to develop practices that are geared towards managing and responding to that uncertainty.  He talked a lot about the Build, Measure, Learn loop (which Kent Beck later turned around to the Learn, Measure, Build loop), ways to fail fast, and knowing when it was time to pivot.  If you follow Eric’s blog there was not much new here, but he was setting the table for the presentations and case studies that would follow.

Build, Measure, Learn – Kent Beck

Kent Beck followed Eric Ries.  Kent was one of the authors of the Agile Manifesto and the creator of test-driven development and Extreme Programming, which popularized pair programming.  As a developer, TDD and pair programming are two of the most powerful tools/practices I have seen in all my years.  As I mentioned earlier, Kent talked about the Build, Measure, Learn loop, which makes sense from an engineering/development point of view, but by running the loop backwards, we pull the build (from Lean Manufacturing’s concept of pull) from what we learn.

Kent also went through other aspects of the Agile Manifesto: people over processes, collaboration, accepting change.  He was preaching to the choir, so he didn’t go into too much depth.  He did go into some depth on knowing when to live with a simple solution to get a product out (he used the term “hackery”) and when you need to refactor.  As an example, he cited Amazon Dynamo.  At the scale Amazon operates at now, this is an essential service, but if they had tried to start with it, they would have ended up on the scrap heap.  As a developer, I have seen many projects fall behind by over-designing or over-engineering early on.  Getting out there early allows you to see if you have a viable product, and as you learn more about usage and features you can refine your design and implementation.

Agile Development Case Study – Farb Nivi

A real highlight for me was the Agile development case study at Grockit by Farb Nivi.  Farb shared a lot of great information about accepting change/chaos and how to write good user stories (make them narratives, not “to do” lists).  He talked about an Agile planning tool they use called Pivotal Tracker that I need to take a look at.  His presentation also had a very cool feel to it and was done with a tool called Prezi.

The thing that really caught me about Farb’s presentation was the development culture and the way they have embraced pair programming.  The interview process at Grockit is simple: candidates come in and pair program with one of the development team.  I can’t think of a better way to find out whether a candidate is someone I would want to work with and has sound software engineering skills.  I have had a few posts on the value of pair programming and the challenges in getting more developers to adopt it, so having this question about a candidate answered up front is invaluable.

Grockit’s philosophy on languages is simple and straightforward too.  They don’t care what languages or tools you have used; if you have good skills and work well as part of a team, you will be able to produce good code in whatever languages they are using.  That is an incredibly refreshing and enlightened attitude (although on their website they say you need to have programmed in Java and Ruby for at least a year.  Jeez, Farb, which is it?  LOL).  Developers at Grockit spend 70–80% of their time pairing, with the remainder spent on spikes or other stand-alone tasks.

Getting to Plan B – Randy Komisar

After the break, Eric Ries sat down with Randy Komisar in a talk titled “Getting to Plan B”.  This was a great back-and-forth discussion between Eric and Randy.  Randy talked about answering the “Leap of Faith” questions: the questions that, if you can’t answer them, will kill your product.  They can be technical, marketing, whatever.

Randy also talked about the importance of a “dashboard” to guide you and track your progress: are you measuring the right things to answer your leap-of-faith questions?  I found some interesting articles in the Sloan Review and from Sramana Mitra that touch on a lot of what he talked about.  When the presentations go online, I strongly recommend watching this one.

In Closing

There were several other good presentations, more than I can write about.  When the presentations go online, I strongly recommend checking them out and finding the things that hit home for you.  Thanks again to the folks at the Lean Startup Circle – Boston for putting on the simulcast and giving people a place to get together and exchange ideas.

Update: The videos from the Startup Lessons Learned conference are available online.

R&D IT Best Practices for Growing Small/Mid-Sized Biopharmas

The Mass Technology Leadership Council put on a roundtable discussion on best practices for small/mid-sized biopharmas at Microsoft NERD on April 14.  The agenda is shown below:

As smaller biopharma companies grow and mature into larger organizations, they need to transition from relatively ad hoc, spreadsheet heavy, lightly supported R&D IT environments to more comprehensive, scalable platforms and support models. This presents substantial challenges to scientists, managers, and IT professionals, ranging from prioritization of new features and functions, creation of new workflows, budget and staffing issues, in-house platforms versus COTS (commercial-off-the-shelf) platforms, and getting support for necessary changes. Our expert panel will discuss these issues and more in this MA Technology Leadership Council Life Science Cluster roundtable discussion.

The discussion covered a range of topics from virtualization of IT, data management, build vs. buy, and collaborating with scientists.

Virtualization and Cloud Computing

Cloud computing is a very hot topic these days, and there is obviously a lot of interest in this area among small and mid-sized biopharmas.  Being able to pay as you go and the reduced infrastructure costs have a lot of appeal.  However, being comfortable with having your hardware, software, and especially your data outside of your direct control is still an issue for a lot of people, and regulatory issues like HIPAA and protecting your intellectual property remain concerns.  Uploading large data sets, on the order of gigabytes, is still not workable over the network; it is often easier and cheaper to ship the data sets on physical media to be uploaded.

There was a lot of discussion about data security: are you more secure having your data hosted by an organization that (hopefully) has significant resources dedicated to security, or are you more at risk having the data in house?  A lot of data and IP leaves most companies every day in the form of laptops, USB sticks, email attachments, etc., which presents its own set of security concerns.

Collaborating With Scientists

There was a very good discussion about how to collaborate with the scientists to build tools to manage data.  The take-away from this discussion was to understand their workflow and to learn what they do instead of just asking what they need.  By understanding their workflow, you are in a better position to understand their needs.  One of the panelists made an interesting comment that scientists tend to feel that what they do is unique, which is why there may be some resistance to a commercial-off-the-shelf (COTS) solution.  They also tend to shut down when handed a complete solution, and are much more receptive to being part of the process of defining it.

As a practitioner of Agile, this makes perfect sense to me.  You can’t deliver a solution to your end users if you don’t understand their workflow, and they should be an integral part of defining the solution.  Also, by building solutions incrementally, you allow for refinement and early feedback from your end users.

Data Management Policies

One of the themes that all the panelists touched on was trying to deal with data being spread across the company in a variety of formats.  When a company is in its early stages, lots of “data” resides in spreadsheets and PowerPoint presentations, and most people know where it is or can give you enough clues to find it, e.g. “Bob did a presentation back in May that….”.

As companies grow, this data becomes harder and harder to manage.  Content management tools like SharePoint and Documentum, or a LIMS (laboratory information management system), can address issues like these, but there are setup and maintenance costs that can be an issue for a small start-up.

Probably the biggest takeaway for me from the entire session was the notion of putting a data management strategy in place early on.  The strategy would address issues like where and how to store data.  The policy would not need to be restrictive, just some basic rules.  That way, when the need arises to add tools and support to manage the data, incorporating the data into the system will be easier.

Coming at it from an Agile perspective, this makes a lot of sense.  Do the simplest thing, i.e. a basic policy for how to store your data, before going to a heavyweight approach like a full-blown LIMS or content management system.  The key is to make sure the policy is easy to understand and implement and is not a barrier to innovation.  The last thing you want is people NOT following the policy because it is an impediment to progress.  Coming back to the idea of collaboration and understanding your users’ workflow, you want to make your users part of defining the policy.

Build vs Buy

Build versus buy needs to weigh the start-up costs and learning curve of a COTS solution against the long-term costs of maintaining custom solutions.

As time goes on, the maintenance costs for custom solutions can become prohibitive, especially as the amount of data (and the set of required features) grows and/or the principals who developed the tools move on.

Open source tools have a lot of appeal as they offer some level of off-the-shelf capabilities and community support with the ability to customize if needed.  People in the bioinformatics space are often not comfortable with proprietary algorithms/solutions and want to be able to see what’s under the covers.

Other Interesting Nuggets

Other interesting things that came up are…

Periodic technical audits of your tools, processes, etc.: having someone from the outside come in and take a look at how you are doing things.  Are you behind the curve?  Have you become numb to certain pain points that could be fixed?  Finding someone to do this whom you trust and who doesn’t have a vested interest in selling you a particular solution may be a challenge, but I found it to be an intriguing idea.

What sample tracking systems are available that manage secondary samples?  For example, if you do an extraction or build a library from a base sample, how do you trace that relationship back to the parent sample?
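As a minimal sketch of what that kind of parent/child tracking might look like (hypothetical names and IDs for illustration, not the data model of any particular LIMS), each derived sample just needs to carry a reference to the sample it came from:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical, minimal sample-lineage model: each derived sample
# (an extraction, a library prep, etc.) keeps a reference to its parent.
@dataclass
class Sample:
    sample_id: str
    kind: str                        # e.g. "blood", "extraction", "library"
    parent: Optional["Sample"] = None

    def lineage(self) -> List["Sample"]:
        """Walk the parent links back to the original base sample."""
        chain: List["Sample"] = [self]
        current = self.parent
        while current is not None:
            chain.append(current)
            current = current.parent
        return chain

base = Sample("S-001", "blood")
extraction = Sample("S-001-E1", "extraction", parent=base)
library = Sample("S-001-L1", "library", parent=extraction)

# The library traces back through the extraction to the base sample.
print([s.sample_id for s in library.lineage()])
# → ['S-001-L1', 'S-001-E1', 'S-001']
```

A real system would store these links in a database rather than in memory, but the core idea is the same: the parent relationship recorded at creation time is what makes the lineage query possible later.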

As companies move towards a regulated space, tools that provide an audit trail are very appealing to IT groups.

In summary, it was an interesting discussion and gave me a lot to think about.