I really don't understand the aversion to AI from people in industry. Everyone from software engineers, artists, and IT people to writers and filmmakers seems to be heading for the hills over their fear of AI. I've come across this more frequently as the technology has matured, and in conversations with friends and relatives who are in tech or design. It even came up recently at an appointment, when my practitioner mentioned that her husband (a software engineer) refuses to use AI, and it's like... why?
Especially as a developer, where one would expect to view it through the lens of efficiency, like a more time-efficient search algorithm. Why perform a less efficient search for a solution when you have a more efficient one at your disposal, one that frees your brain power for more useful things than searching, like solving the actual problem? I'm not saying copy and paste the code it produces (which would be foolish), but query it for potential things to consider. Use the algorithms it does generate as inspiration for, or scaffolding of, something you might not have considered, or as a rubber duck to think through a problem.
On the developer front, in addition to viewing it as a rubber duck, I think it's useful to view it as a new programming paradigm. Each decade has seen new developments in how we write code, from procedural to object-oriented to functional and declarative styles, even to the more recent reactive and event-driven styles. In this context, I think the use of AI ought to be considered another paradigm to add to the list. It's not going to replace the others, but it will augment them, and it will be useful in certain contexts where the others are not. It's a sort of black-box paradigm: you don't know what's going on inside, but you know what's coming out is useful (pending an expert's review, of course). It's like a function that takes in a problem and spits out a solution, and you don't care how it got there, you just care that it did. But what's more, it requires a competent, domain-specific person to drive it.
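To make that black-box-function framing concrete, here's a minimal sketch in Python. Everything in it is hypothetical: ask_model stands in for whatever model or API you actually use, and the point is only the shape of the workflow, where a problem goes in, a candidate comes out, and a competent reviewer decides what survives.

```python
# A sketch of the "black-box paradigm": a problem goes in, a candidate solution
# comes out, and a competent human decides what (if anything) gets adopted.
# `ask_model` is a hypothetical stand-in for whatever model or API you use.

def ask_model(prompt: str) -> str:
    """Send a prompt to some AI model and return its raw text response."""
    raise NotImplementedError("Wire this up to your model of choice.")

def draft_solution(problem: str) -> str:
    """Treat the model like a function: describe the problem, get a candidate back."""
    prompt = (
        "I'm working on the following problem. Suggest approaches to consider, "
        "edge cases I might be missing, and a rough implementation sketch:\n\n"
        + problem
    )
    return ask_model(prompt)

def review_and_adopt(candidate: str, reviewer_approves) -> str | None:
    """Nothing ships without a domain-competent reviewer signing off."""
    return candidate if reviewer_approves(candidate) else None
```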
Does this spell the end of the junior developer? No. On the contrary, it lowers the steepness of the curve one must climb to become a proficient, competent developer. And this is not limited to developers. It's useful for artists, writers, musicians, and so on. It's a tool that can be used to augment the creative process, not replace it. You saw this at a smaller scale when YouTube started to gain traction: it allowed people to distribute to larger audiences, it broke down barriers to entry in otherwise guarded industries, and it allowed novices to learn from the best and from otherwise-ignored but talented folks across the world.
AI is just the next step in that evolution. But rather than a static mentor you can't interact with, it's a personal one that can help you with your specific problem, and it's available 24/7. I view it as a sort of curated Google that's slightly dumb, or a finally useful implementation of Clippy, the annoying office assistant Microsoft put in its Office products in the early 2000s. And just like Google, it requires a competent person with domain-specific knowledge of the topic being prompted to drive it in any meaningful way toward the useful result one desires.
There have been a lot of recent articles discussing the woes of what's to come. And don't get me wrong, there's merit to many of their ideas:
Are deep fakes a concern?
Is it possibly enhancing the distrust of experts?
Is it corrupting the youth?
Are there privacy and surveillance concerns?
What about cheating in academia and industry?
I think one issue everyone leaves out is: Are we overreacting and possibly creating anticipatory or preventative measures that curb our paranoia in one direction, only to steer AI's development in a less desirable and wholly unpredictable other direction? I'm not going to explore this too much in this article, but it is reminiscent of the US War on Terror and all of the terrible "preventative" (often self-serving to special interests) legislation (Patriot Act, cough cough) that failed to prevent much of anything and instead diminished the rights of many US citizens during the decades that followed.
The bottom line is, no one can predict where this thing is going. Indeed, very few predicted it was coming. I'm not saying it's a waste of time to try to anticipate certain undesirable outcomes. But all too often we're reminded of the failures of experts and lawmakers to do so, especially while watching their inability to resist the deep pockets of actors looking to take advantage of a panicked group with taxpayer money to spend. We've seen this in the eco-movement, we've seen it in the military-industrial complex, in agriculture, in healthcare, in finance. It's a story as old as governments and the wealthy, powerful market players who have something to sell to equally powerful government actors looking to expand their own interests.
It is true that these tools will fundamentally change the way we developers, artists, writers, multimedia producers, musicians, etc., do our jobs. But I tend to think of it as a labor multiplier, much like the excavator was for the shovel and the automobile was for locomotion. History has not been kind to those who resist new technology. Indeed, it is replete with examples of those who resisted and missed out, while those who were first to embrace it became the pioneers the market rewards, whom the rest of us desperately try to catch up to.
Read my other writings at ohof.one