News-210430

30 Apr 2021

Algorithms - particularly in the context of AI - have been coming in for a lot of negative attention lately. Maybe it's time for people to take a less hands-on approach in their creation.


As long as we have computers, we will always have algorithms. This is because an algorithm is simply the definition of the steps a computer must carry out to perform a given task. As such, every piece of code represents an algorithm.
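

For example, even a few lines of conventional code spell out the steps of an algorithm. The following Python snippet is purely illustrative (the function name and task are invented for this sketch) and computes an average one step at a time:

    def average(numbers):
        # Step 1: guard against an empty input
        if not numbers:
            raise ValueError("average() requires at least one number")
        # Step 2: add up all of the values
        total = 0
        for value in numbers:
            total += value
        # Step 3: divide the sum by the count to get the mean
        return total / len(numbers)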


Some people also use the term algorithm to describe AI systems that are produced through approaches like deep learning. This usage of the term is also reasonable since any such system is functionally equivalent to a sufficiently large and complex piece of conventional software.


The concerns that have been voiced about algorithms include the potential for various forms of bias and misuse - either accidental or deliberate. Fundamentally these concerns relate either to defects in how software is constructed or simply to the purpose for which it is created and used. The common factor is the role that people play in creating and using software. Lack of transparency - particularly with machine learning - can also exacerbate these problems.


Software itself does not introduce bias or instigate misuse - people do. At present there are various proposals to regulate AI or even to ban some types of software altogether. That will be up to the law-makers in their various jurisdictions. But none of these proposals really address the root cause of these problems.


Currently the vast majority of algorithms in use are produced by people - or more precisely by software developers. People who choose to become coders are often driven by a fascination with technology. This means they are naturally more interested in how a problem is solved than in understanding the problem itself. How a problem is solved is the algorithm. This developer fixation on algorithms can lead to all sorts of problems with requirements and even with the choice of technology.


There are many algorithms that can be applied to solve a particular problem, and in some cases - such as sorting or compression - there is a wide range of existing approaches to choose from. Yet choosing or designing the right algorithm for a given problem can be difficult, and a single strategy may not be suitable for all operational conditions. While developers are certainly interested in the details of how an algorithm works, everyone else couldn't care less so long as it operates correctly, is cost effective and satisfies the relevant non-functional requirements.
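

As a hypothetical sketch of this point (the function names and the size threshold below are invented for illustration), a program might switch between sorting strategies depending on how much data it is given:

    def insertion_sort(items):
        # Simple quadratic sort - efficient enough for very small inputs
        result = list(items)
        for i in range(1, len(result)):
            key = result[i]
            j = i - 1
            while j >= 0 and result[j] > key:
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = key
        return result

    def sort_items(items, small_threshold=32):
        # Choose a strategy based on an operational condition - here, input size
        if len(items) <= small_threshold:
            return insertion_sort(items)
        # Fall back to Python's built-in Timsort for larger inputs
        return sorted(items)

The caller neither knows nor cares which branch was taken, provided the result is correctly sorted and arrives at an acceptable cost.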


Most inherent problems with algorithms are ultimately due to insufficient attention to requirements and/or inadequate testing. People make mistakes, and the more manual steps there are in a process, the greater the opportunity for miscommunication, misunderstanding, assumptions, omission and even mischief.


The only way to really address these problems is through automation - reducing the involvement of people in the coding stage of software development. This long-standing goal of computer science is finally achievable, as we can now generate code directly from requirements using Zoea - an AI that can produce software directly from a set of test cases. Effectively, Zoea is a computer system that can create new algorithms.
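

To give a flavour of what specifying software as a set of test cases means, here is a minimal sketch in Python - this is only an illustration of the idea, not Zoea's actual input format:

    # A handful of input/output pairs serve as the specification for a
    # program that does not exist yet (here, capitalising each word).
    test_cases = [
        {"input": "hello world", "output": "Hello World"},
        {"input": "zoea", "output": "Zoea"},
        {"input": "", "output": ""},
    ]

    def satisfies_specification(candidate):
        # A generated candidate program is acceptable if it reproduces every case
        return all(candidate(case["input"]) == case["output"] for case in test_cases)

    # For example, this hand-written implementation would meet the specification:
    assert satisfies_specification(lambda text: text.title())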


Algorithms aren't ever going to go away. Instead we can delegate most of the responsibility for their creation and maintenance to a machine. 


Being able to produce software directly from a specification opens up software development to many more people without them having to learn complex programming languages. It also means that people can put more effort into getting the requirements right, while making software less expensive to produce. This should lead to fewer defects, but in the longer term it also has the potential to address broader issues such as security and data protection. Safeguards that prevent the rise of Skynet will probably have to be baked into the silicon.