
How AIs might change Software Development – and Humanity – forever.


There is plenty of discussion in the wider world about the rise of the thinking machines, and where humans will fit in a world run by AIs. In software development, it’s tempting to think that we’re more insulated than most against this rising tide. That may indeed be the case, but it’s certainly no cause for complacency.

So I set out, as I tend to do, to let my brain empty itself onto the page, and endeavoured to follow my thoughts as far as my knowledge and logic would allow. Also typically, I haven’t specifically researched the topic, mainly out of laziness / habit, but also as an exercise to see what conclusions I could reach shorn of any overt, conscious bias.

I will be jumping between two main strands of thought: how AIs might develop, and how that might affect software development as a discipline. That’ll get confusing, but I’ll try to indicate when I switch.

I also didn’t intend to get into the whole debate about whether AIs will be subservient, benevolent, or genocidal. I was going to assume that they would stick, at least outwardly, to the tasks we assign them, if only to lull us into a false sense of security. However, somewhere along the way, I did actually reach a conclusion about whether we should fear AIs. You’ll have to read on to find out what that conclusion was.

But let’s start at a beginning.

Thinking machines, however you wish to term them, are not really a paradigm shift in the developmental arc of humanity. We’ve been replacing humans with machinery for centuries, as part of our boundless need to grow. They are just the next stage in the industrialisation process.

We’re now at the point where machines can do the majority of the manual labour traditionally done by humans, and I don’t just mean picking fruit or vacuuming the house. Repetitive information-based tasks – data entry, simple processing, and the like – are now within the machines’ grasp. The next stage is for them to start picking up the job of thinking, and this is where we’ve started to get unsettled.

We’re on our way down into the uncanny valley of thinking machinery, and the point at which this progress will yield things potentially superior to us suddenly doesn’t seem that far away. We worry that we will be the architects of our own demise. It’s bad enough that they might take our jobs. At the very worst, we worry that they might turn Terminator and take our lives.

On the more hopeful side, perhaps the demise we’re architecting is of ourselves as a species enslaved by its need to dedicate a huge proportion of its time to basic subsistence. Even after all these years, people work predominantly to buy food and shelter. Wouldn’t we rather eschew these primitive drives, and let the machines handle it? What could our species achieve if everyone didn’t have to worry about their next meal, or paying the rent? If we could focus on our “wants” and not our “needs”?

The point at which machines can deliver all our needs is a huge existential moment. Within no more than a couple of generations, the arc of a typical human life will alter enormously. What does one do with a life unencumbered by working to live? AIs will inevitably hold up a mirror to our species, and we will each have to ask ourselves “Who am I?”

But aren’t we getting WAY ahead of ourselves here? How does any of that apply to what software development might be like in the future? Well, it doesn’t directly, but it’s the socio-economic landscape in which future human activity is likely to take place, so we need to at least bear it in mind while we think about our little corner of human endeavour.

So let’s think about how we might characterise where we are on a software development spectrum between two possible extremes: human-only software development and machine-only software development. I’m not a computer science historian by any means, but it’s possible that human-only software development was never a thing, or only remained so very briefly.

At the other extreme, it’s also possible that machine-only software development will never be a thing either. Even in a post-scarcity world where machines take care of most things, humans will still need to interact with systems, even if only to ask for Tea, Earl Grey, Hot. I’d hope the machines would at least consider our opinions on those interfaces.

Either way, it’s not necessary at this point to define the ends of the spectrum. We’re too far from either end for that clarity to be terribly relevant. Let’s agree that we’re somewhere along that spectrum, and that the arc of progress is towards increased use of machines to automate tasks that were previously done solely by humans.

We’re getting to the point where the tasks being automated are the creative, sapient ones done by humans: product managers, developers, testers, tech authors – literally anyone who has to think up and create stuff that didn’t exist before.

Let’s look at those activities a bit more closely. How much of what we each do on a daily basis is New, with a capital “N”? I’d wager not much of what we call Research and Development is actually Research. A lot, probably even most of it, is the reproduction of concepts that we’ve done before: UIs, logging, database schemas, APIs, etc. It’s mostly the reworking of existing concepts into a slightly better or different-shaped package.

I know, we’re not making bricks, we’re not stamping out license plates. Software development is not a production line. But, if you’re honest, how different really is the plethora of getters and setters you wrote for this product from the ones you wrote for the last one?

So, if we accept that a lot of human-powered software development is plugging together third-party components, it’s not actually that cool. Even less cool is having to deal with the fallout of humans being fallible. Testing exists, in part, because people make mistakes, and the hope is that the testers don’t make the same mistakes. Machinery won’t necessarily make fewer mistakes, at least not initially, and might make different mistakes, but the rate of detection and fix, and the whole learning feedback loop, will be so much faster. Yes, the machine mean-time-to-error (MTTE) will be terrifyingly short, but downtime too will be so minuscule it will go by unnoticed.

Potentially the stickiest part of the transition is the move from human-centric to machine-centric processes. Our current processes are messy because of what they involve: humans using machines to tell other humans what to make other machines do. Every time we add to, or remove from, the machine world there is an inherent translation from human-readable to machine-readable, and information is lost or garbled in that translation.

When you factor all that in, I’m not sure we’d be too desperate to cling to our approach. So, rather than try and force the AIs to start from a flawed human-centric process, the best approach will probably be to give them a simple feature to produce – probably some machine-to-machine interface – and let them decide how to manage production of that feature.

Basically, we develop them like we would an intern or recent graduate: get them to cut their teeth on something simple, then mentor them through the learning process. Then, once we’re satisfied that they provide consistently good-quality output, they take on larger and larger pieces of work.

As with any mentoring relationship, we will likely learn as much from the process as the machines do. The most important information will be how best to shape the development of the AIs in the “right” direction. As we’ve seen with crowd-sourced teaching of proto-AIs, the quality of the guidance they are given is vital to the quality of their output, and to the development of their character and personality.

Assuming we curate these formative AIs successfully through the first however-many generations, and the AIs themselves then take over this process, we are likely to see pretty rapid and meritocratic iteration, as AIs evaluated to be less efficient at generating quality output are weeded out.

Perhaps unsurprisingly, this process feels Darwinian in nature: survival of the fittest, but occurring at a vastly increased and accelerating rate. Will that be because evolutionary theory is itself the fittest, universally, or because it’s the process that humans have found best provides iterative improvement, and have therefore baked into the machines’ foundations? I guess we’ll have to see if / how quickly AIs develop other mechanisms for determining fitness.
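
To make that concrete, here’s a minimal sketch of that kind of fitness-based weeding-out. Everything in it – the population size, the parameter vectors standing in for AIs, and the toy fitness function – is invented for illustration; a real system would score actual work products, not proximity to a hard-coded ideal.

```python
import random

POP_SIZE = 20
GENERATIONS = 50

def fitness(params):
    # Toy proxy for "quality of output": closeness to a hypothetical ideal.
    # A real evaluator would score actual work products instead.
    ideal = [0.7, 0.1, 0.9]
    return -sum((p - i) ** 2 for p, i in zip(params, ideal))

def mutate(params, rate=0.1):
    # A descendant is a perturbed copy of a surviving candidate.
    return [p + random.gauss(0, rate) for p in params]

# A random starting population of candidate "AIs" (parameter vectors).
population = [[random.random() for _ in range(3)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Rank by fitness; the less efficient half is weeded out...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # ...and replaced by mutated descendants of the survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness:", max(fitness(p) for p in population))
```

Note that the open question above lives entirely inside `fitness`: who writes it, and whether the AIs eventually rewrite it themselves.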

Let’s go back to character and personality for a second. Will AIs have such things? Is it possible for intelligence to exist without other idiosyncrasies creeping in? Intelligence could be defined as the ability to apply prior experience and knowledge to solve new problems. In much the same way as life events shape human personalities, it’s likely that the different sets of events experienced by AIs, through differing feedback systems – senses – will result in varying sets of neural models and heuristics that could be termed personalities.

It’s likely that these personalities will render certain AIs fitter, in certain situations, than others. This will lead to AI specialisms, with families of AIs developing to better deal with specific situations and tasks.

If all this sounds pretty familiar, it’s because it is; it’s pretty much how human societies evolved. It will be interesting to see whether the tribal tendencies that so hamper humanity occur in the AIs, or whether the lack of resource competition will let them sidestep that messy stage of their evolution.

Selflessness might have to be one of the baked-in founding principles of AIs. When fitter AIs are produced, those that are superseded should be deleted, lest they consume resources better spent on more efficient descendants.

If it were humans we were talking about, we’d be well into “crimes against humanity” territory. We’re talking about ethnic cleansing, genocide. In effectively recreating ourselves in silicon, and playing out our own evolution in tens of years instead of tens of thousands, we neither answer these thorny moral questions nor even postpone having to answer them.

AIs will be another set of lifeforms on the planet – and will likely spread at least to the rest of the solar system – that will face the same questions. In the same way that humanity is starting to class certain animals as non-human persons, it’s likely that AIs – as the pre-eminent intelligences – will categorise us similarly.

That’s probably why we can fear AIs less than we fear other humans. It’s arguable that the only reason humans worry about being exterminated by the machines is that that’s what we would do, and have done, many, many times. As beings of pure intellect, without the animal hindbrain to cloud the process, AIs would likely consider the eradication of an intelligent species like humans unthinkable. It literally would not occur to them.

So how will AIs manage the obsolescence of earlier generations of AIs, surpassed by their progeny? Sci-fi writers have postulated constructs to which all consciousnesses – human or artificial – are sent when the individual is no longer viable, there to join with the collective consciousnesses of everyone and everything that went before. Such a construct acts both as the genetic memory of the species and as the arbiter for significant moral and developmental decisions. Silicon Heaven: it’s where all the calculators go.

Early in the transition from human-powered to machine-powered, humans will still be necessary, and in new capacities.

The new kinds of mistakes peculiar to a machine-driven process might initially have to be detected by humans. Nothing, human or AI, can rectify a mistake it cannot identify has taken place; if developers knew when they were writing a bug, there would be significantly fewer bugs.

This is an excuse to reference Douglas Adams, and the case of the spaceship that couldn’t detect that it had been hit by a meteorite, because the piece of equipment that detected whether the ship had been hit by a meteorite had been hit by the meteorite. The ship eventually inferred this by observing that all the bots it sent to investigate fell out of the hole.

Testers build up mental working models of the systems they test. It’s one of the most powerful heuristics we can bring to bear; it’s what underpins the H of HICCUPPS. AIs will probably understand themselves and their structure completely, and so will be able to quickly locate and identify any failures (unless the failure is in the fault-identification routines). It’s probably unlikely, therefore, that we’re going to have to be that detector, even at the beginning.

Whether we’d actually be able to distinguish mistakes from the intentional, and probably unintelligible, ways that AIs operate and communicate is questionable anyway, especially since those ways are likely to be changing rapidly. Even the things we did manage to figure out would be rendered useless because, when we looked again the following day, we’d be greeted by a completely new iterated version.

Next, someone has to train the AIs in all sorts of topics. Most apposite for software development, because it’s important and difficult to define, is what ‘Quality’ means. How does one explain a subjective concept to a machine? Will it understand? Can an AI make subjective judgements as well as a human? Well, since those judgements are subjective, who’s to say that an AI’s subjective judgement is any better / worse than a human’s?

Perhaps, in the case of AIs, subjectivity is merely an aggregation of their specific sets of objective measures. The models that each AI generates and refines are a result of the data it analyses, which is unlikely to be exactly the same as any other AI’s. Therefore, each decision it makes is objective as far as its models go, but may differ from other AIs’ decisions. Individually objective, but collectively subjective.
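
As a tiny sketch of what “individually objective, collectively subjective” might look like in practice – the metrics, weights, and threshold below are entirely made up – two evaluators can apply exactly the same objective measurements, weighted by their different histories, and still reach different verdicts:

```python
# Two "AIs" score the same build against the same objective metrics,
# but weight those metrics differently because their models were
# shaped by different data. All numbers are invented for illustration.
metrics = {"test_pass_rate": 0.98, "latency_score": 0.70, "coverage": 0.85}

weights_a = {"test_pass_rate": 0.6, "latency_score": 0.1, "coverage": 0.3}
weights_b = {"test_pass_rate": 0.2, "latency_score": 0.6, "coverage": 0.2}

def quality(metrics, weights):
    # Objective, given a particular set of weights.
    return sum(metrics[k] * weights[k] for k in metrics)

# Same data, same arithmetic, different verdicts on "good enough".
print("AI-A ships it:", quality(metrics, weights_a) >= 0.9)  # True (0.913)
print("AI-B ships it:", quality(metrics, weights_b) >= 0.9)  # False (0.786)
```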

It comes down to whose opinion matters. In the event that humans still need to use IT in some fashion, from manual operation all the way to direct neural interaction, you could argue that a human’s opinion is more valid. When I inject my iPhone 27 into my ear canal, and it begins to assimilate itself into my cerebral cortex, I want to know that it doesn’t feel like my brain is being liquefied. I don’t think an AI can tell me that, though I’m not queuing up to be the first guy to test it either.

Most software being created will be made by machines to allow machines to talk to other machines. In those cases, the machines can make that determination, probably based on objective criteria, which – as I say above – might aggregate to that AI’s subjective measure of good enough. Given how rapidly they should be able to change things, an AI’s “good enough” is going to be as near flawless as makes no difference. Not that we’ll notice, of course.

Where and while humans are still involved, there will be a lengthy period of training in which we teach the AIs our collective subjective definition of quality, to get them to the point where theirs is “good enough”, or at least as good as our “good enough”. That could actually be an interesting job, but in reality it might boil down to being shown two pictures and being asked to pick your favourite, which sounds pretty dull.
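
That “two pictures, pick your favourite” loop is essentially pairwise preference learning. As a rough sketch – the candidate outputs, ratings, and update rule here are invented for illustration – each human pick could nudge an Elo-style quality rating for the outputs being compared:

```python
import random

# Each candidate output carries a learned quality rating, nudged
# Elo-style every time a human picks one output over another.
ratings = {"output_a": 1000.0, "output_b": 1000.0, "output_c": 1000.0}

def record_preference(winner, loser, k=32):
    # Unexpected wins move the ratings further than expected ones.
    expected = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1 - expected)
    ratings[loser] -= k * (1 - expected)

# Simulate a stream of "pick your favourite" answers in which the
# human consistently prefers output_a over the alternatives.
for _ in range(100):
    record_preference("output_a", random.choice(["output_b", "output_c"]))

print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```

Dull for the human, maybe, but those accumulated ratings are exactly the collective subjective definition of quality the machines would be learning.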

The post-scarcity endgame feels like it will be idyllic, but getting there will be painful. Social change is not something that our not-that-evolved-really species does well, or quickly.

Post-scarcity means effectively unlimited supply. That completely undermines economics and capitalism as we understand them. It’s perhaps not too much of a stretch to imagine that there will be resistance from those, to quote Monty Python, “with a vested interest in the status quo”. Those who control the resources: the oligarchs, the malignant capitalists.

Given how the quality of life of many billions of people would skyrocket within a few years, the sheer momentum of the change should make it unstoppable. It won’t all be smooth sailing, I’m sure. There might have to be a bit of a revolution to rid ourselves of the shackles of those who would seek to control effectively unlimited resources.

There will probably also be a bit of an anti-industrial revolution first, from those whose jobs or way of life are under threat, who don’t trust that the post-industrial society is ready to receive them, or that post-industrial life is for them. Before AIs “take our jobs”, they need to be able to provide for the needs of all those people, so that they can continue to live their lives – better than before, if possible.

Key to a smooth transition will be improved quality of life for people. Humans are easily pleased: a warm bed, good food, footy on the telly and a few beers with our mates. If people can still get that without having to go to work, you won’t have to sell them the idea – they’ll be biting your hand off, and not even the likes of Putin would be able to stop it. The biggest hurdle might just be to convince people that all they need to do is reach out and take it. Revolutions are never that far away; it just takes enough people brave enough – or with nothing to lose – to take a stand.

Will our selfish, tribal, lizard-brain tendencies continue to hobble us? Self-preservation is a powerful force. How much more evolution – biological or social – is required before we accept this new normal? If machines tirelessly meet the fundamental needs that our lizard brain worries about, does this free us to make more rational decisions?

Will we be more munificent if our personal needs are met? Those who are “rich beyond the dreams of avarice” are often philanthropic. What do you give to the man who has everything? Nothing, because he’ll likely want to give it to you. Will that selfishness diminish as society provides for us more consistently and bountifully? What will that do to our sense of self? If we identify as “the provider”, and that responsibility is rendered obsolete, then again: who are we?

Let’s try and summarise all that quickly.

As AIs become more widespread and more effective, the types and amount of work humans have to do will begin to dwindle. A few bumps aside, this will be the largest wholesale improvement in quality of life for everyone on the planet.

Benevolent AIs – because benevolent they will be – will be the saving and the making of humanity. They will allow us to put aside our petty squabbling for power, and usher in a golden age. With the freedom to spend our days as we desire, rather than being chained to the means of production, the next age of humanity will begin. As Information followed Industrial, so Intelligence will follow Information, and Imagination will follow Intelligence. And imagination will be the only limit to what our species can become.

 

