I like collecting bugs (as in software defects), or at least stories about tracking bugs down. My favorites are always subtle and hard to reproduce but involve a one-line change (as a result, I have a lot of bug stories related to threading).
This week I spent a number of hours tracking down an interesting defect. Basically, some new API calls were spontaneously failing in our QA environment. After turning up the logging and searching through it, I identified some exceptions that were thrown periodically and corresponded roughly to the times the web calls were failing. They included the following:
System.Data.SqlClient.SqlException: Transaction (Process ID 123) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
Happily, we have a great (and award-winning!) database developer on a nearby team. He had planned to set up MSSql traces with me, but he was actually able to find the silver bullet by inspection: based on that message he identified a stored procedure, and he gave me a lesson on concurrency.
Concurrency is hard to get right in this case. I think we want

SELECT @id = TableId
FROM TableName WITH (HOLDLOCK)
WHERE Key = @myKey

instead of

SELECT @id = TableId
FROM TableName
WHERE Key = @myKey
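The exception message itself suggests the other half of a mitigation: "Rerun the transaction" when you are chosen as the deadlock victim. Here is a minimal sketch of that retry idea in Python. The `DeadlockError` class is a stand-in I invented for whatever driver-specific exception carries SQL Server's deadlock error (number 1205); no particular database library is assumed.

```python
# SQL Server's error number for "chosen as deadlock victim"
DEADLOCK_ERROR_NUMBER = 1205

class DeadlockError(Exception):
    """Hypothetical stand-in for a driver exception carrying error 1205."""
    def __init__(self):
        super().__init__("Transaction was deadlocked and chosen as the victim")
        self.number = DEADLOCK_ERROR_NUMBER

def run_with_deadlock_retry(transaction, max_attempts=3):
    """Re-run `transaction` (a zero-argument callable that performs one
    complete database transaction) if it is picked as a deadlock victim,
    up to max_attempts times before giving up."""
    for attempt in range(1, max_attempts + 1):
        try:
            return transaction()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            # in real code: sleep a short, jittered backoff here so the
            # colliding processes do not immediately deadlock again

# Toy usage: a "transaction" that is deadlocked once, then succeeds.
calls = {"n": 0}
def flaky_transaction():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError()
    return "committed"

result = run_with_deadlock_retry(flaky_transaction)
```

Retrying treats the symptom rather than the cause; the locking-hint fix above is what removes the deadlock, and the retry is belt-and-braces for the deadlocks you have not found yet.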
This makes a great bug story for my collection: it really was a one-line change, it took some time (for me) to locate, and I now have a really useful resource in the form of Michael’s Mythbuster post. What I like about the post (besides the Mythbusters theme) is that it is systematic and comprehensive and gives you the tools to reproduce the problem and “try this at home”. So if you have arrived at this page because you too are debugging a deadlock exception, you will want to check out the above post.
I am quite excited about the opportunities that exist for eLearning startups and can think of a collection of projects that I would love to see emerge. Recently, though, I wanted to take a somewhat wider perspective, so I went looking at what various commentators and bloggers had to say about startup opportunities within the eLearning sector and where overall investment might settle over the next few years. Some of the results of that investigation will continue to appear, with suggestions and commentary, on the Edge Challenge program’s Facebook Page, but here I wanted to detail some design implications for learning technology that emerged from my exploration.
Ironically, my pursuit of the educational design topic turned up a somewhat contrarian result while I was searching for material on applying technology in the classroom. The article discusses the frustration and failure experienced by other teachers trying to recreate Professor Michael Wesch’s focus on in-class use of technology, and the related observation that many effective and highly regarded instructors do not make technology central to their method, while others do so quite successfully. This difference made me wonder which elements of this observation are fundamental attributes of applying technology and which are situational. It also invites design and optimization questions about “what problem are you solving” with the technology or product.
“What problem are you solving” is a fundamental question that people in the startup community ask themselves, and one that potential investors also ask. In the eLearning scenario, the objective at some level is to help learners acquire knowledge or competency. The discussion of excellent teachers in the article (and elsewhere) strongly suggests that inspiration and wonder are critical precursors to learning, drawing a line, via engagement, through the objective and on to the actual learning outcome. Technology, then, is a delivery mechanism for strengthening either the wonder and inspiration on the one hand, or the knowledge transfer and competency practices on the other. While focusing on the situational use of technology, I was reminded that learning innovators should be taking great instructional and learning experiences and exploring the problems of delivering those experiences, rather than directly trying to change how learning gets delivered (though that consequence may follow).
The idea that you should make small changes to user behavior in your product or technology design (even if you are a startup out to change the world) also has a strong lineage. This idea was popularized by Sabeer Bhatia as a founder of Hotmail (and documented by Jessica Livingston in Founders at Work): “… don’t try to change user behavior dramatically. If you are expecting people to dramatically change the way they do things, it’s not going to happen. Try to make it such that it’s a small change, yet an important one.”
Considering how rapidly a startup or product can grow in popularity often seems to be an explicit step in evaluating the potential of tech investments: if growth requires a dramatic change in user habits, it may be an undesirably slow opportunity.
My interim conclusion on strategy then? “New projects will succeed where they focus on technology that adjusts a small element of a proven successful approach with nearly no user impact. The technology needs to get out of the way to be successful.”
This sounds a lot like user-centric design principles: the concept of understanding a user’s needs and designing to meet them with minimal cognitive load. This very effective approach to design fits well with the business practice of a startup or project that proposes success through making a “small, important change”. But a value-neutral design is not the only way of designing a product. Effective, impactful designs can also arise through a process that acknowledges the influence of design decisions. This important distinction is explored effectively in FrogDesign’s DesignMind magazine.
Educational innovation does in fact have a history of design with consequence: the history of education in the industrial revolution has been thoroughly explored, with particular focus on the intent of the education system. But back to the contrarian result: what conclusion should innovators draw for their technology initiatives?
I think we are in the formative days of eLearning technology with many opportunities just beginning to appear. In most cases, new projects need to focus on what problem they are trying to solve and to put the technology in the background supporting that objective. Whether we are trying to create technology that is transparent or technology that “has an opinion” the best way to validate an idea is to get out there and build something.
If you are someone who wants to get out there and build something new, check out our Edge Challenge.
Also check out our partner program and reach millions of users.
Development of new software is a creative process at its core. It still requires all the discipline associated with mastering a creative process, but the act of bringing a product or technology into existence is not purely formulaic.
I have been thinking this week about one facet of the activities surrounding software development: communicating what it is you are trying to do. Again, because this is a creative process, it is not trivial to express. In fact, it is often expressed approximately through metaphor, as in the pitches of new companies. Most often this analogy takes the form of “It is X for Y”, where X is an existing product someone can relate to and Y is the market the company wants to pursue. This method is so common that it has become a cliché.
The analogy is a useful tool, but a guided demo is a much richer way to communicate what you aim to accomplish, and lately I have been using demos instead of other communication tools because of this richness. I have been finding that the subtle implications of a new piece of often complex software become immediately apparent when you demo. (Note that this is a collaborative demo intended to explain, not impress. This is not the product launch demo where you are trying only to highlight the best features.)
Here are some of the most beneficial side effects I have experienced with “The Demo”:
It Focuses On The User. When you present a demo, you implicitly highlight the role of the user and what they are trying to accomplish. Normally the conversation you present in a demo starts with something like, “If a student wants to open a document, they select File, then Open from the menu”: this is very effective use-case style language.
It Avoids Rationalization of Incompleteness. When working on code, you can easily feel that you are “almost” done with something. When you prepare for a demo, you scenario-plan what you are going to show, and in that process you surface a more realistic picture of how close you are. This realistic picture can help shed light on two things. Firstly, it can help with the prioritization you must do in an agile development model. Secondly, while you can fake a nice-looking demo yet leave a lot of the work incomplete under the surface (for a great exploration of this, see the iceberg secret), collaborative demo planning can reveal where you’ve done this ahead of the actual presentation.
It Encourages Feedback. The live, interactive process of a guided demo invites better feedback than written or even static visual examples do. Recorded videos do not benefit from this as much as a live and interactive demo. To gain this benefit and have a chance to use the feedback, the project must be planned so that the demo is prepared early in the process. You must also conduct the demo in a way that allows for, and welcomes, live questions and feedback as you proceed.
It is Memorable. Oftentimes when you talk about a product with an internal team, the other people just need to know the software’s edges so they can incorporate it into their planning and route related requests or questions appropriately. The guided demo, presumably because of its multi-sensory, present, and engaging action, has a greater capacity to be memorable. (It could be interesting to examine what happens to the mirror neurons of the audience during a demo.)
Rather than just presenting the status of your software, you can also demo in such a way as to communicate the direction and design elements associated with a particular feature. This doesn’t naturally emerge from the demo structure, but you can consciously include it with language like, “In order to support a familiar user experience, the student starts by selecting File -> New, where you can see the list of activities. We expect to expand this list to include other activities.” This presents an opportunity to synchronize on the rationale (“consistent user experience”) and communicate technology roadmap direction (“expanding activity list”) in a way that stays natural and within the presentation’s context.
If you are in the early stages of a project, consider the demo as one aspect of your communication to others. Good luck and let me know any stories that result from your experiences with presenting guided demos.
We are in the middle of expanding our set of samples for Desire2Learn Valence, and an early example made its way up onto the site today. I had the pleasure of receiving a great email from a partner I was helping: they had found the sample on their own, and they indicated how helpful it was as a starting place for understanding and using the new RESTful APIs.
This reminded me of an axiom shared with me by an old-school (as in “back-in-my-day-we-had-to-use-punch-cards”) developer I worked with in my first internship: “All code can trace its lineage to HelloWorld”.
Now, I don’t believe that lines of code from any of the various HelloWorld samples play any significant role in most large-scale systems. However, HelloWorld does play an enormous role in the training process of developers as they come to understand the capabilities of a platform on their way to using that platform as part of a system.
Platforms benefit from a variety of samples exploring different aspects of an API or different approaches, but there will always be a special place for the “HelloWorld” or “Getting Started” type of sample: developers almost always start with the real feasibility step of “getting something up”. In the course of creating samples for Desire2Learn APIs, we have been discussing some properties that make a good “HelloWorld” sample.
Time to “Up”. I call this “time” but it really includes the concept of “ease”. Personally I measure this in two ways: firstly, starting from the main entry point to the SDK documentation; secondly, starting from Google and searching something like “ HelloWorld“. Ultimately the process needs to get you to the sample itself, explain any pre-conditions, and fit nicely into an existing development environment. Then you need to run the code and something needs to appear. This is a particularly important challenge in multi-component platform systems where there is often a separate server or service in play.
SNR. Signal-to-noise ratio is a second important aspect of a “Getting Started” sample. When you finally have the sample running and go look at the source, it needs to be predominantly related to the basic operation of the sample. There are a lot of great ways of organizing code to enable plugins or to make it future-proof against changes. Those great techniques belong elsewhere. Additionally, this is one area where I tend to demote the importance of aesthetic concerns, or at least ensure that the visual or graphic design elements remain in static data kept separate from the sample code (except, of course, for APIs directly related to rich graphics systems).
Security and Error Checking. Typical security checks and fault handling are often omitted from the written descriptions explaining samples, but I believe this is one area where the original axiom of tracing an app’s lineage to HelloWorld is material. HelloWorld is typically a source of code snippets when it comes time to do the real work of integrating with a platform or API. Patterns of security practice, such as cleaning inputs, and of error handling are hard to reintroduce if those code snippets migrate too far without appropriate review. So this is one area where being comprehensive rather than concise makes sense; even if the error handling changes when the final system is created, at least there is existing code that acts as a reminder to the developer that something must be done to satisfy the error conditions and security implications of their app.
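To make that concrete, here is a sketch of what a “HelloWorld”-style sample might look like with the input cleaning and error handling left in rather than stripped out. This is my own illustrative example, not code from the Valence SDK: the endpoint path and host are placeholders, and the `hello_world` helper is an invented name.

```python
import json
from urllib import request, error

def hello_world(host):
    """Make one minimal GET against a hypothetical versions-style endpoint.

    `host` and the URL path are illustrative placeholders for whatever
    "smallest possible call" a platform's HelloWorld sample would make.
    """
    # Input hygiene: reject obviously malformed hosts rather than letting
    # them flow straight into the URL we build below.
    if not host or "/" in host:
        raise ValueError("host must be a bare hostname")

    url = "https://{0}/d2l/api/versions/".format(host)
    try:
        with request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except error.HTTPError as e:
        # Surface server-side failures explicitly instead of swallowing them.
        raise RuntimeError("API call failed: HTTP {0}".format(e.code)) from e
    except error.URLError as e:
        raise RuntimeError("could not reach {0}: {1}".format(host, e.reason)) from e
```

Even if a real integration replaces these particular checks, their presence in the sample leaves a visible marker that bad inputs and faults must be handled when the snippet is copied into production code.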
Language and Environment Specific. This is not so much a binary “yes or no” as it is an arms race. There are often many choices of language or environment, and each developer would be most productive working exclusively in their preferred one. Often the best you can do is get a representative sampling in place.
Ultimately, samples are a key part of the process of communicating basic information to developers new to a particular aspect of an SDK, and “HelloWorld” samples have a special place.
You can look forward to more Valence samples over the next few weeks, designed around these criteria.