MARCH 2018 CIOAPPLICATIONS.COM

We must consider that our systems and processes, designed to interact with humans, will increasingly be interacting with a hybrid of humans and digital human-agent proxies.

· Assumptions of Stability: Many methods, regression testing in particular, work from the premise that we can look at a stable set of past data to project how some new enhancement will function. The preconditions for such methods center on stability. Yet our digital environments are slowly becoming intentionally unstable in certain ways. For example, autonomous devices include goal-modification modalities for when the environment changes in unpredicted ways, and these devices do not have the ability to "check in" for new instructions. We should carefully consider how such environments change the sort of testing required to anticipate shifts in the environment and the automated reactions to those shifts.

· Malfeasance: There are many recent examples of technology and systems "misbehaving" because they were used in unintended ways. Sometimes these unintended uses are benign, as when users find that a piece of functionality serves another purpose (e.g., using data from home automation systems to design better security systems). While such unintended use should be considered, there is arguably substantially more risk from unintended malfeasant use. Consider, for example, a factory-automation system that records data which, if compromised, could be used to reverse-engineer intellectual property. On a more human scale, consider autonomous biomedical devices that can be externally configured in unintended ways. The science of building use cases for failure and adverse conditions must keep pace with the changing behavior of cyber criminals and others who intend to turn systems and processes to their own ends.

· System Learning/Making New Mistakes: Development is becoming increasingly agile, and developers are increasingly mobile.
There is growing use of shared methods through open source initiatives. There is a huge opportunity for automation testing to increase our ability to sustain captured learnings. For example, in the future we may better use AI methodologies to predict, in advance, the types of failure likely to emerge, leading to development methodologies that are more anticipatory. This sort of "self-healing" development mindset has been around for some time (for example, instituting knowledge management systems for developers), but new capabilities to capture, retain, and synthesize massive amounts of dynamic data bring about exciting new possibilities. We must learn from our mistakes, but also learn from how we react to failures, better anticipating the required shifts in the training, methods, and tools of the future.

We are indeed at the cusp of a new era in technology development. There is an enormous and increasing expectation placed on the speed and quality of development. The cost of failure is no longer contained at the system level, because increasingly everything is connected to everything else. We live in exciting times indeed for those who continue to advance our capabilities to test, to improve, and to proactively influence the march of progress. Technology is ever better able to predict, advise, and in some cases intervene to make things safer and more useful.
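The earlier idea of mining historical failure data to anticipate the types of failure a new change might introduce can be illustrated with a toy sketch. Everything here is a hypothetical assumption for illustration: the defect records, the failure categories, and the simple word-overlap scoring stand in for whatever real defect history and AI methodology an organization would actually use.

```python
from collections import Counter, defaultdict

# Hypothetical historical defect records: (change description, failure category).
# All data below is invented purely to illustrate the idea.
HISTORY = [
    ("added retry logic to payment gateway timeout", "integration"),
    ("changed session cache eviction policy", "performance"),
    ("refactored login token validation", "security"),
    ("increased thread pool size for batch jobs", "performance"),
    ("new external API call for shipping rates", "integration"),
    ("updated password hashing parameters", "security"),
]

def train(records):
    """Count how often each word in a change description co-occurs with each failure category."""
    counts = defaultdict(Counter)
    for text, category in records:
        for word in text.lower().split():
            counts[category][word] += 1
    return counts

def predict(counts, text):
    """Score each category by word overlap with the new description; return the best match."""
    words = text.lower().split()
    scores = {cat: sum(c[w] for w in words) for cat, c in counts.items()}
    return max(scores, key=scores.get)

model = train(HISTORY)
print(predict(model, "tuning cache size for the batch thread pool"))  # -> performance
```

A real system would replace the keyword counting with richer models and live defect-tracking data, but the anticipatory loop is the same: capture past failures, learn their signatures, and flag risky changes before they ship.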