There are plenty of methods to catch and fix bugs before a piece of software is shipped, but Microsoft is testing a new one that may be a wee bit invasive for developers: biometrics.

A Microsoft Research paper entitled "Using Psycho-Physiological Measures to Assess Task Difficulty in Software Development" details experiments with developer biometrics: monitoring developers' eye movements and physical and mental characteristics as they code, measuring alertness and stress levels that indicate a higher probability of code errors.

Researchers Andrew Begel, Thomas Fritz, Sebastian Mueller, Serap Yigit-Elliott, and Manuela Zueger conducted a study of 15 developers in which psycho-physiological sensors, including an eye tracker, an electrodermal sensor (measuring sweat on the skin), and an EEG (brainwave) sensor, were strapped to the developers as they programmed various tasks. The study found that biometrics could predict task difficulty for a new developer 64.99% of the time; for a new development task, the researchers found biometrics to be 84.38% accurate.

Microsoft's researchers concluded that biometrics could predict when software errors will occur better than traditional approaches that look for defect and bug risk in software metrics. They admit, however, that a host of internal and external factors, including personality, personal-life stresses, and even the time of day, could throw off the biometric readings.

In a vacuum, biometric readings and predictions may give a more accurate picture of a developer's stress levels and state of mind during coding. But beyond the fact that the constant feeling of being watched and monitored could itself contribute to developer stress, placing such painstaking emphasis on catching bugs during coding discounts the importance of software testing in the software development life cycle. Thinking with a Dev/Test mindset of catching errors and preventing bugs before QA testing occurs is a valuable trait for developers, but software testers are there for a reason. Their job is to anticipate where a bug will occur and to know the whole of the delivered software inside and out. Whether testers accomplish this by manual or automated means, biometric readings are not a replacement for professional knowledge and human instinct.

Not to mention, a group of 15 developers is hardly a representative sample.

(Related: How enterprises are maintaining testing quality in a Continuously Delivered world)

The researchers conclude that "it is possible to use fewer sensors and still retain the ability to accurately classify task difficulty," and they hope the research will lead to the development of predictive programming-support tools. But once you start strapping sensors to a developer's forehead and skin while tracking every subtle eye movement they make, does it really matter how many sensors they're using? Probably not to the developer.

Read the full Microsoft Research paper here.


It was more than 30 years ago that Microsoft Windows was first released. At the time, it was a radical departure from the text-based interfaces that dominated most screens. It has been over 25 years since Windows 3.0, the release that first got people really paying attention to Windows. Suddenly, there was a reason to: Multitasking was important, and it was something DOS didn't do. Even so, Windows had to fight off the perception that it was for games before it found its footing as a useful productivity tool.

Fast forward to today, when virtual and augmented reality are making for fun games thanks to platforms like Oculus Rift and titles like Pokémon Go. Games have thrust these technologies into the consciousness of individuals and business leaders, who now wonder how they can be used for productivity instead of entertainment. It's up to today's corporate developers to take the technologies and make them productive.

The Learning Curve

Like the learning curve for Windows decades ago, the learning curve for virtual and augmented reality isn't shallow, but it's one that corporate developers can overcome. While most corporate developers could historically ignore threading and performance concerns in their applications, that is no longer the case. The need for real-time feedback means deferring processing so the application can focus on the interaction with the user. That means learning, brushing up on, or relearning how to manage threads in your applications.

It also means looking for optimal processing strategies, something most developers haven't thought about since their computer science textbooks. With Moore's Law creating massive capacity in both central and graphics processing, it has been some time since most developers needed to be concerned with which strategy was fastest. As these platforms emerge, however, it's necessary to revisit the quest for optimal processing strategies, including the deferral of work onto background threads.
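To make that deferral idea concrete, here is a minimal C++ sketch of the pattern. It is not tied to any particular VR toolkit, and the expensiveAnalysis function and the ten-frame loop are hypothetical stand-ins. The heavy work runs on a background thread via std::async while the interaction loop polls for the result without ever blocking:

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Hypothetical stand-in for heavy work (scene analysis, physics, etc.).
int expensiveAnalysis() {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return 42;
}

int main() {
    // Defer the heavy work to a background thread so the loop below
    // stays free to handle input and rendering every frame.
    std::future<int> pending = std::async(std::launch::async, expensiveAnalysis);

    for (int frame = 0; frame < 10; ++frame) {
        // Poll without blocking; the frame proceeds whether or not
        // the background result has arrived yet.
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            std::cout << "Result ready on frame " << frame << ": "
                      << pending.get() << "\n";
        }
        // Simulate one ~16 ms frame of user interaction.
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}
```

The zero-length wait_for is the design point: the frame loop never stalls waiting on the computation, which is the responsiveness that real-time VR and AR interfaces demand.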
More challenging than the development-oriented tasks may be the need to build models in three-dimensional space. Most developers eventually got decent enough with image editors to create quick icons that designers could later replace. Building 3D models is different: It means a different set of tooling and a different way of thinking.

The Applications

Most corporate developers have been relegated to working on applications far removed from the reality of the day-to-day business. Recording the transactions, scanning the forms, tracking customer interactions: all were important, but disconnected from making the product, servicing the customer, or getting the goods to the end user. VR and AR are changing that. Instead of living in a world that's disconnected from how users do their work, VR and AR integrate how users do their work with how they learn.

In the corporate world, VR applications include training with materials that are too expensive or dangerous to work with in reality, as well as the remote management of robots and drones that do work too difficult for a human to do. Instead of controlling electrons in a computer, VR software is moving atoms or rewiring human brains. Training is no longer a boring video of someone else doing something; it's an interactive simulation that used to be too expensive to build. The opportunity to operate remotely through VR provides the benefits of human judgment with the safety of not exposing humans to dangerous conditions.

AR can augment humans. Instead of having to memorize reams of content, information can be displayed in context. Knowledge management systems have traditionally been boring repositories of information that is difficult to access; AR connects the knowledge repository with its use.

AR also gives humans access to sensors beyond our five senses, bringing thermal imaging, acoustic monitoring, and other instruments into our range of perception through visual or auditory cues. Consider how digital photography transformed the photography industry: Now everyone gets immediate feedback and can make adjustments instead of having to wait for the development process.

The Change

Ultimately, VR and AR mean that developers get the chance to have a greater and more tangible impact on the world around them. They can be a part of augmenting human capacity, reducing the risk to humans, and improving training. All it takes is a refocus on threading and performance, and learning a bit about 3D modeling.
