All this talk about ChatGPT replacing programmers and skill atrophy - meanwhile all I'm getting out of ChatGPT is bullshit and hallucinations. Copilot is amazing at boilerplate, but that's about it - I don't even read any suggestions that don't fall into that category anymore.
Copilot is amazing because it lets me stay in the flow when I need to churn out stuff (when I already know what I want). I would pay over $100/month for a faster/less jittery Copilot.
ChatGPT is cheap at $20/month but not even worth that price.
I think there are two camps of think-piece authors emerging: the ones who've tried a bunch of different examples of things and got passable-to-great-looking results, and the ones going deeper into specific areas and hitting the wall in terms of expertise and specificity. Using the GPT-4 API, I'm definitely often hitting limitations, especially around depth of information, and having to "prompt engineer" my way around them. After using a dozen prompt variants to try to prod it in the direction I want without seeing it reflect those changes, a bit of the magic wears off.
I'm bearish on the idea of prompt engineering being a big long-term skillset, since I imagine the "understanding the prompt" side of the tools will get better, but I don't see it necessarily getting around the need for specificity of input. It feels like writing a task ticket and giving it to a junior person - what you get back might not be what you need, and a lot of the time the true difficulty is knowing exactly what you need up front. Reducing that cycle time is wonderful, but it doesn't replace the hard-earned skill of knowing what to make.
I am in the camp of people who see the current limitations but also see the rate of progress and think those limits may not stand for long. It is as yet unclear whether it will progress like autonomous driving or like playing Go...
Prompt engineering won't go away; it'll get more "engineer"-like. Knowing how to describe a point in a model's latent space for the generation you want is here to stay, but the black-magic and art aspect of it will go away.
For example, in Stable Diffusion land, lots of people have intuition about the relationship between certain prompts and the output they produce. That intuition is embedding- and training-data-specific, so it's not really transferable (even to different fine-tuned models based on Stable Diffusion 1.5). However, I use CLIP interrogation to map the portions of the latent space that my prompt is pointing to, evaluate the embedding text to find desirable/undesirable elements, then adjust the prompt or add negative prompts to steer my generations towards what I want.
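Concretely, that loop looks roughly like this (a sketch, assuming the clip-interrogator Python package and its README-style API; the model name and image path are placeholders):

    from PIL import Image
    from clip_interrogator import Config, Interrogator

    # Map an existing generation back to prompt text, to see which parts of
    # the embedding space the image actually landed in.
    image = Image.open("generation.png").convert("RGB")
    ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
    print(ci.interrogate(image))

From there I read the returned text for desirable/undesirable elements and fold them into the prompt or the negative prompt for the next run.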
Prompt engineering is merely the entry-drug to AI-wrangling.
SD has gotten to the point that someone can fine-tune a model (LoRAs) with 2 days of time and $2 of GPU time.
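For a sense of how small the moving parts are, here's roughly what attaching a LoRA adapter looks like with HuggingFace PEFT on a small language model (a sketch only - SD fine-tuning uses different tooling, and the rank/alpha values here are arbitrary):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")

    # Rank-8 adapters injected into GPT-2's attention projection layers.
    config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                        target_modules=["c_attn"])
    model = get_peft_model(base, config)
    model.print_trainable_parameters()  # only a small fraction of the weights train

The adapter weights are the only thing you train and ship, which is why the compute cost can stay that low.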
There'll be roles for AI wranglers in every large company, where you'll be gathering the dataset and building LoRA plugins to adapt the AI specifically to your codebase/customer base/documentation, etc.
There are also processes involved in building APIs for the AI (AIPI?) to use and interface with your documentation and systems, setting up vector databases, monitoring AI output, etc.
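The "interface with your documentation" part is mostly retrieval; at its core it's something like this toy sketch (the bag-of-characters embed() is just a stand-in for whatever embedding model and vector database you'd actually use):

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Toy stand-in for a real embedding model: a bag-of-characters vector.
        vec = np.zeros(256)
        for ch in text.lower():
            vec[ord(ch) % 256] += 1.0
        return vec

    docs = ["How to request a refund ...", "Deploying service X ..."]
    doc_vectors = np.stack([embed(d) for d in docs])

    def top_k(query: str, k: int = 3):
        q = embed(query)
        # Cosine similarity between the query and every stored document vector.
        sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
        return [docs[i] for i in np.argsort(sims)[::-1][:k]]

    print(top_k("refund policy", k=1))

The retrieved snippets then get pasted into the model's context; the monitoring piece is checking that the output actually sticks to them.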
People who think there won't be jobs for expert AI users are just coping, thinking "haha, AI will kill your job too." The steam engine was more powerful than 100 men. In the end it required something like 30 people up and down the value chain to support the engines, from coal mining, to coal shoveling, to maintenance, to manufacturing.
I'm not sure most codebases are unique enough for that. There will certainly be some of that at places that are doing new things, but for the average online service backend or frontend app programming tasks, I think things like Copilot will see enough and get trained well enough out of the box to be pretty one-size-fits-all.
There will be a lot of business pressure towards using the "good enough" out-of-the-box ones too. If you've got a team of less than a hundred people, rolling your own "datasets, LoRA plugins, APIs for AI, vector databases, monitoring, etc." means a multi-person team and a significant chunk of new expense. So is the incremental gain there for small-to-medium teams with relatively "standard" problems?
Kinda like self-hosting at that scale vs using a cloud vendor.
I agree with this. For me, scenarios where I know what I want are better handled by Copilot - I'm way better at writing code than GPT prompts. Copilot then picks up the boilerplate as I go along, and since I know what I want, it's easy to fact-check.
There are some scenarios where it would be useful to have a chat-like interface in the editor to prototype fast - hopefully Copilot X delivers.
I think there are two camps of authors emerging: the ones who tried a bunch of different examples, got terrible results, and wrote the whole thing off; and the ones who pushed on through that, kept exploring the capabilities of the model, and couldn't believe how useful it could be once they figured out how best to use it.
I'm surprised with this response. I myself have found it extremely useful and ChatGPT has saved me tons of time with programming and non-programming tasks.
> all I'm getting out of chatgpt is bullshit and hallucinations
> ChatGPT is cheap at 20$/month but not even worth that price.
This is so general that it's obviously not true. It's providing lots of value to lots of people. To me this sounds like someone with the goal of confirming their own biases.
What's my bias? I want this to work - I want to expend less effort to do my job. So far, for the problems where ChatGPT would fit in the workflow, it's taken me more time to fact-check the plausible bullshit it generates than to do it the old-fashioned way.
Copilot is way better at generating boilerplate.
The one task I did find it useful for was converting model types to an OpenAPI spec - out of a month of trying to use it.
Your bias is that you want it to do your job, and you're extrapolating that since it's no good at that (for whatever reason), it's no good for anything - forgetting that there are many other jobs out there.
I agree with your general sentiment and have been slumming on r/singularity lately, where there has been a ton of hype. One thing I think I've gleaned from reading the comments there is that people who aren't as skilled at using search engines find ChatGPT to be magical.
To my way of thinking, crafting the perfect prompt is about the same, or more, effort than crafting the perfect Google search. In both cases I'll probably have to double check the sources if I want a critical analysis of the results.
It might be worth considering, for those that GPT is helping a lot, what are you using it for? And for those that GPT is not helping, what are you trying to do with it?
"Programming" is a pretty broad activity description. I can readily imagine that AI tools, trained on publicly-available data, would be more helpful with, say, Wordpress plugins than with flight control systems.
Yup. I always wonder why my experience is not like others'. Are those PR people for Microsoft?
Example:
I gave ChatGPT a list which looked like:

    st street
    av avenue

and asked it to convert this to YAML format as:

    st:
      name:
        street

and so on.
It failed spectacularly. Not even once but about 10 times. Even if it succeeded, it kept changing the output by doing ops which I never mentioned in the prompt (like reordering and merging duplicated values to a single key)
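For comparison, the deterministic version of that transformation is a few lines of Python (a sketch, assuming the input rows are whitespace-separated abbreviation/name pairs):

    pairs = ["st street", "av avenue"]  # one input row per list element

    # Emit exactly the requested YAML, preserving order and duplicates.
    for row in pairs:
        abbrev, name = row.split(maxsplit=1)
        print(f"{abbrev}:")
        print("  name:")
        print(f"    {name}")

No reordering, no merging, and the same output every run.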
> Even if it succeeded, it kept changing the output by doing ops which I never mentioned in the prompt (like reordering and merging duplicated values to a single key)
That's something I don't see mentioned enough: if you change the input to an LLM, that may potentially change the probabilities of all the output tokens. Most of us would be surprised if we told a junior developer to fix a bug in a specific module and they submitted a PR which modified literally every file in the source tree, but that's entirely plausible with an LLM. Asking it to "fix" one thing may change/break completely unrelated things.
It’s a YAML key-value format using a map. And even if it nailed the format, it kept changing the output on its own. I mentioned not to change the order or remove duplicates; it kept doing that anyway. I gave it 100 elements, it kept giving me around 80. And yeah, it was GPT-3.5.
Doubt it. I've tried prompts where a simple Google query would lead to an answer, to see if it could save me time (e.g. a query about AWS service usage, some Azure stuff). In every instance I got a plausible-sounding solution to my problem that directly contradicted the documentation; it invents capabilities/features and misleads.
I've tried using it for code review on a few functions and tasked it with improving the provided code - every time it wrote worse code. E.g. I had some logic that would filter to a new list and then append a replacement; its refactor did filter -> add-or-replace for already filtered items, and the reasoning was bullshit: fake performance claims about avoiding an allocation when the "allocation" in question was a value type, and the suggested alternative was replacing a vector with a hash map, which is both logically wrong (it loses the ordering) and slower for the use case.
For generating small stuff like a regex, the pain you have to go through to get a correct prompt is higher than writing the thing yourself, and you still need to double-check it.
I see no use case where ChatGPT would improve my workflow at its current stage, and I've seen so many idiotic bugs recently where, when pressing the devs who introduced them, the explanation is basically "ChatGPT".
The one time it was useful was when I had to convert a model definition to an OpenAPI spec - it was easy to fact-check and give feedback to get a decent solution.
I'm on 4 and fully agree, and I'm still confusedly wondering... I know I will be shouted down, but what are all you "tech guys" doing in your daily biz? Just gluing hyper-documented boilerplate together and spitting out the same textbook examples? I don't get it.
That's my impression too: for every engineer with a master's or more, you have 10 developers who just got out of a 3-week JavaScript boot camp.
Of course they think ChatGPT is a revolution and will replace developers - that's because they don't see the bigger picture.
If you work on anything remotely hard, you already know coding is like 10% of the job, and of that only 10% is trivial - and that's the only part GPT will get right.
I straight up don't understand the dialogue around ChatGPT replacing tech jobs, lmao.
OK, can ChatGPT understand the super ambiguous requirement demanded by the customer, translate it into something meaningful to implement, anticipate what the customer actually wants (or will need in the future) and make sure the implementation meets that nuanced complexity? I doubt it.
Parts of our job that require skill and parts of our job that require time are not the same.
"understand the super ambiguous requirement demanded by the customer, translate it into something meaningful to implement, anticipate what the customer actually wants (or will need in the future) and make sure the implementation meets that nuanced complexity" takes skill, but that is NOT what takes up most of the workday, implementation does - and if a tool saves some meaningful time on the implementation part, then the same project can be done in the same time with less people, i.e. replacing some jobs.
> that is NOT what takes up most of the workday, implementation does - and if a tool saves some meaningful time on the implementation part
Sorry, no, completely untrue here. It's about debugging and inter-system complexity, and getting stuff like debug artifacts together, using various tools - a debugger, sniffers here, trace analyzers there - having a clue and figuring out the bug, and then fixing it; but please not the surface quick fix - understanding the root cause. (Though ChatGPT couldn't even do the first of these things well even if guided through most of them, I guess, unless trained on multiple 100k-to-million-LOC code bases, which won't happen for other reasons...)
Pure implementation is the easy part (even if actually hard) and doesn't take up the workday - I wish it did more often.
It can sometimes help with fun tasks like doing a visualization of some data for these things... but there it's 50% great; the other 50% I would have done better using my Google skills and heading directly to the docs or Stack Overflow, where I can judge answers better, or transfer them to my problem more easily.
I personally doubt even ChatGPT 10 will be able to do all these various tasks and reason across them... and even if it could, how much computing power would that take, for how many tech people worldwide? I wonder why I never read about scaling and limits.
I'm using 4 and I still constantly have to babysit it and challenge it to get a working result out of it. Don't get me wrong, it is absolutely saving me time, but it is very much like being the teacher of a very fast-typing junior dev.
Much like the evolving layers of abstraction in technology, the necessity for individuals to master the intricate, underlying layers wanes as time progresses. The article brings to mind the analogy of Assembly programming; while there are undoubtedly experts in this domain whose contributions are indispensable, the majority of programmers can comfortably rely on higher-level abstractions without delving into the complexities of Assembly. As AI continues to advance, general-purpose programming appears to be following a similar trajectory. Individuals have the choice to either become one of the few specialists at that level or to embrace and harness the emerging abstractions, thereby unlocking greater efficiency and innovation in their endeavors. It is crucial to strike a balance, recognizing the importance of both mastery and leveraging AI to optimize our creative potential.
> while there are undoubtedly experts in this domain whose contributions are indispensable, the majority of programmers can comfortably rely on higher-level abstractions without delving into the complexities of Assembly.
True. You don't need to be an assembly expert to be a great programmer, but I do think you need to have a solid understanding of how computers work all the way down to the CPU level in order to be a great programmer. And once you have that knowledge, assembly isn't a big hurdle anyway.
I don't think any individual needs to be an assembly expert, but as a community developers still need some people to be assembly experts.
Our systems will rot without people to maintain them; the AIs aren't infallible yet.
The greater worry isn't that the general user will let mostly defunct skills rot, it's that we need a path to raise the next generation of experts in these niche skills.
Oh, don't you worry, that path is called SNES emulators and teen crackers reverse engineering new releases' product activation loops. Curious teens are a staple of civilization.
You’ve never learned about it in university? It's still a semi-important concept: learning how a compiler is kind of rewriting your code and optimizing things. On rare occasions it can mess things up when you use optimization options - even in rare iOS and Android development cases, and with LLVM. Not exactly assembly, but similar logic. I think that goes to the original point: if you are just trying to make something, sure, JavaScript is enough, but I don't think that's a good mindset to have if you are trying to be an expert in computing/computer science.
Sorry, I didn't mean it that way. What I mean to say is that even in iOS and Android work, there are edge cases where assembly knowledge does help. While LLVM isn't exactly assembly, it's similar in logic. It's not so much about "computer science" as it is about a topic that's still decently good to know and be familiar with when you hit the build button on a native app. Granted, these are rare edge cases. It's similar to what someone in this thread mentioned about street navigation and how we don't care about it anymore. I do... there will be a time and place in my life when a GPS will not be in my pocket, so it's still important to stay familiar with physically navigating the world. As we abstract more and more away from our workflows, it's still a good mentality to keep yourself familiar with the old tools. It's why math is still important for programmers to know! You won't need a matrix multiplication anytime soon, but it helps if you are suddenly doing graphics and want to know how something is working (or AI…)
So I’m not trying to be difficult, but can you give an example of how it could help with some sort of iOS task? Because I really don’t know and I’m curious.
I don't think you are being difficult at all! Also, in fairness, I've only run into it once that I can remember off the top of my head, but there was a problem with an optimization flag setting. A crash would only happen with a certain higher-level optimization flag (you can also get linker issues, but that is not really the issue here). When you get a compiler bug, or it feels like there is no way your source code could be wrong, sometimes the problem is at a level where inspecting the disassembled code and comparing it with your written source helps. In 10 years of doing iOS stuff I've only ever had to do it once. An edge case of an edge case, but I am glad I could handle it when I ran into that problem.
Yeah, and you can thank those abstractions that today we have Microsoft bragging that they got the Teams chat client down to only 9 seconds of loading time on a machine with several multi-GHz cores and gigabytes of RAM. But that's all fine, I guess, because it's good enough.
Good enough for what? Good enough that people will grudgingly use it because instantly quitting their job over it would be overblown. Their life will only become a little bit worse through it.
I'm frankly scared how AI-created "good enough" software will look.
That's the thing: it's not C#, it's Chromium + React.
They can't be bothered to write a truly native app for their own platform. Heck, they can't even be bothered to dogfood their in-house React alternatives.
All of that for an app that's used by hundreds of millions of people every day. It's those things that make me think Casey Muratori really is onto something.
Thank you for clarifying, and I recognize that React does lots of extra work because "it's good enough" - but is that really the limiting factor of Teams? Slack is built on Electron, which we can chide for being resource-inefficient as well.
I think about it in the same terms. We make better and better abstractions, which make it easier and easier to code - usually with some performance loss that better hardware can cover.
The implication of that is this will have a similar impact as previous abstractions.
But two questions:
1. Does this scale?
2. Is the type of difference between Python and Assembly the same as between Python and AI Codegen?
This just makes me want to combine compilers with an understanding of LLMs and how ChatGPT does what it does, to make real next-generation (but really next-level) programming languages.
LLMs will make distinctions like functional, dynamic, procedural, ... become obsolete, just like all assembly languages are now just "assembly" but in fact used to vary by architecture (and even by each architecture's generation/version).
assembly -> C -> something made with a compiler which embeds ChatGPT
In a way, people who are concerned about skills atrophying are repeating the cycle of previous generations, shaking their fists at higher-level programming languages or at drawing images digitally. How dare you not keep the skills of those who came before you! Time will not be so kind to this view.
Some skills will atrophy; as the article mentions, we don't really care about road navigation anymore. But this is not a bad thing - we can focus our concerns elsewhere. Do you care exactly how your home is heated? Do you know how to maintain every part of your car? Do you spend every weekend keeping your sewing skills up to date? Some of you might do all of these, but the point is we have let so many skills that once used to be common become things relegated to experts/tech.
It's this relegation that has allowed experts to become even deeper experts in their niche. So I don't worry about skills atrophying at all. There will always be a new crop of nerds (affectionately) to obsess over whatever niche can exist.
It's not a bad thing until it's a bad thing. There are certainly computer-related tasks I am happy to relegate to AI; my concern is that we are already bad at maintaining software made by humans. If old-school software engineering becomes an arcane art as everyone becomes prompt engineers and software starts to explode in complexity, we'd better hope AI learns to maintain it too.
Maybe it'll be like hard goods; a blender you buy today is made of moulded plastic and lasts 5 years, instead of being metal and made to last 20 years. For 1% of the price.
Funny you mention old school software engineering, because it has become an arcane art! We do not feed punch cards into our computers anymore. Should also mention I don't believe "prompt engineer" will be a serious long term role, that task will just be incorporated into everyone's roles instead.
IMO maintaining software will become easier; so much of what we do today with maintenance is really just tedious. It's tedious to keep packages updated, to write documentation every update, to keep the community informed on what's going to happen in the next one. It's tedious to rewrite the exact same functionality but in a modern language. The list goes on - so much tedious busywork. What if these things were handled for core maintainers, so they could allocate funds to more complex problems?
> In a way, people who are concerned about skills atrophying are repeating the cycle of previous generations, shaking their fist at higher level programming languages or drawing digital images.
Maybe some.
But I resonate with the title because I noticed I started leaning on ChatGPT to avoid having to think through things. I found myself tweaking ChatGPT prompts and hoping for a better response when the previous one wasn't good enough. Often a better response never comes, and then I have to struggle to start thinking for myself.
Over time that leads to less practice thinking through problems and more difficulty thinking through problems when you can't use ChatGPT. Is that a good trade-off?
I feel the same way I think. These days, I am happy when I am able to use GPT4 to complete a task faster or easier, but am also happy when I write some code "by myself".
However, as for the conclusion of the article: this "age" started only a few months ago. If you are going to call it an age, then the conclusion doesn't hold up. In the long term, programming "by hand" is likely to be similar to wood carving today. It will be an artistic rather than utilitarian pursuit.
There are already multiple services and tools being built with the purpose of writing, deploying, and maintaining software in a 100% automated way using GPT3.5 and GPT4 and natural language specifications. I am building one of them.
We can't assume that these recent efforts to freeze the progress of AI will be successful. So we should anticipate very significant improvements in performance over the next few years.
Very shortly, everyone will realize that it is quite a huge waste of time and money to wait for a person to write code.
Funny to hear that. I'm closer to 60 than 50, and it never occurs to me to use GPT to write my code. So I wonder if it's partly based on how long someone has been doing something. In retrospect, I never copied code from Stack Overflow either (I've used it occasionally to get ideas, but mostly felt the code I'd find there just wasn't very good, or at least was too different from my personal coding style to be something I'd want to use verbatim).
In the end I agree with you though -- if GPT code is "good enough" then paying people to write code will soon be looked at like using horse-drawn wagons to get from place to place.
I'm 45. Programming to some degree for 37 years. Same thing I always ask -- are you using GPT-4 or 3.5? It can be a very big difference.
I think it really depends on what kind of programming you are doing as much as something like age. If you are trying to solve new problems in a new programming language or framework, then it will make more sense to get machine help than if you have a lot of experience in similar problems with the same language and libraries.
But that equation changes with GPT-4 and especially beyond. Eventually you will probably try GPT-4/5/whatever and start using it.
I'm half your age and don't really find ChatGPT useful for my programming work. Maybe I'm not creative enough, but it often seems like more work to get ChatGPT to understand my problem than it is to solve the problem.
Well... lol... I guess I am not a modern web developer then, because until GPT came out I wrote all of my HTML by hand, at least for the last several years. Before that I had phases with ASP/ASP.NET, WPF, and PHP.
Freedom? You've got a weird definition of "freedom" if you think surrendering your autonomy to whoever happens to be running the services you rely on is "freedom."
The problem is that long term, you won't have any idea if what it's spitting out is reasonable or not. We already have more than enough problems with systems being too complex to understand. Solving that with "more complexity we can't understand" doesn't seem a great solution, personally.
> We already have more than enough problems with systems being too complex to understand.
Indeed - this aligns with my own thoughts on the impossibility of AI alignment.
"However, each new layer and new tech that we add becomes a greater separation between what we can perceive and the complexity of the machine that becomes ever more incomprehensible. So we build ever more tech to help us understand the existing tech with each part being another component of complexity and potential problems."
> You can’t, so you just trust. And let go of the complexity, you don’t need to consciously manage it and you are still ok.
Or I can read a variety of sources from across the political spectrum to look for common elements, and I can try to avoid news articles and such in favor of longer form magazine articles written after the fact, and I can, and do, prefer to read "multiple books" on a topic instead of a simple article. I tell people to recommend three books on a topic they think I should learn, instead of an hour long video. By the time I've read a few books, I have a sense of what the authors agree and disagree on, and enough material to have a useful framework to try and hang the rest of the information I receive on.
As far as complexity... my career is literally "dealing with the complexity of modern computers," from a variety of angles - so I can explain, in long form detail, just how badly broken the assumptions we put on computers are.
And I use them to, fundamentally, do the same stuff I used a 486 for a couple decades ago. Write code, use basic webpages, talk to people, listen to music. The details have changed, but the category of "The tasks I use it for" really hasn't changed. We just have a couple orders of magnitude more CPU performance, RAM, disk, etc... to do the same things. Nobody blinks about a modern chat app being hundreds of megabytes, requiring a gig or two of RAM. Yet it's still used to send text to other people.
I'm sorry, I reject your "Just relax and let the algorithms sweep over me!" approach to dealing with all this. That guarantees that I will feed myself with whatever is the most profitable to someone else, think that which is most profitable to someone else, etc. And that's not a way to live life.
"The thing I’m most excited about in our weird new AI-enhanced reality is the way it allows me to be more ambitious with my projects. In the past I’ve had plenty of ideas for projects which I’ve ruled out because they would take a day—or days—of work to get to a point where they’re useful. I have enough other stuff to build already!"
If these projects are somehow meaningful to you on a personal level or in a small circle, great. However, let's not look past the fact that pretty much every human will achieve this super productivity. Hence, in a broader sense most projects will not be interesting, competitive...or more likely...looked at because of sheer abundance.
The other thought I'm having is regarding the rate of change in mastering AI. It's frankly inhumane. You can work in tech and thrive on a tech stack for some 5 years, sometimes 10. Working with AI... whatever you learn is outdated the next week.
I am an existentialist: I am perfectly comfortable in a world with no inherent meaning. Sheer existence and action brings me joy. I spend hours of my day tinkering with computers not because of utility, but for the pleasure it brings.
I think if we could elicit this person's thoughts thoroughly, we would find "meaning" underlying their perceived joy.
You can elicit my thoughts to your heart’s content. Of course there’s meaning. That’s different from inherent meaning (imbued into our world via a god or something).
But that is not what the author stated. They didn't allude to a god-like meaning. They cited utility as an example of a meaning they don't associate with their tinkering.
> With ChatGPT, it’s too easy to implement ideas without understanding the underlying concepts or even what the individual lines in a program do. Doing so for unfamiliar tasks means failing to learn something new, while doing so for known tasks leads to skill atrophy.
Imagine this: A hypothetical "GPT 7" tech can effortlessly create a starship capable of shuttling you from Earth to Mars. It's so reliable that after 100 uses, it never fails. Perfect flights, precise landings, and zero issues. All it takes is a simple command prompt "design a starship to Mars". In this scenario, is there a need to learn the intricacies of aerodynamics, gravitational forces, or rocket science?
The notion here is to liberate ourselves from limitations and focus on what truly interests us. That's the end game.
Someone has to. The "humanity designed tech that it became reliant on and forgot how to reimplement, so when it broke down nobody could fix it" scenario is a recurring sci-fi trope.
I asked GPT-4 for the ImageMagick command to make the white parts of an image semi-transparent.
It generated a command that made the fuzzy white parts [+1 on fuzzy] fully transparent [bad].
I told it that the result is not semi-transparent.
It apologized and gave me another command that produced a blank image. In another case a grayish image.
I told it this is not what I wanted, and it just looped here saying I'm sorry and giving me one of these above solutions.
As a matter of fact, this looping back and forth between half-working and non-working solutions is something that I've experienced every time the first result was not what I asked for...
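For what it's worth, the effect itself is only a few lines if you do it directly with Pillow instead of hunting for the right ImageMagick incantation (a rough sketch - treating "white" as anything above a brightness threshold, with the filenames, threshold, and 50% alpha as placeholder choices):

    from PIL import Image

    img = Image.open("input.png").convert("RGBA")
    pixels = img.load()
    THRESHOLD = 240  # how close to pure white a pixel must be ("fuzz")

    for y in range(img.height):
        for x in range(img.width):
            r, g, b, a = pixels[x, y]
            if r >= THRESHOLD and g >= THRESHOLD and b >= THRESHOLD:
                pixels[x, y] = (r, g, b, a // 2)  # halve the alpha: semi-transparent

    img.save("output.png")

Not the one-liner I was hoping the model would hand me, but it does the job deterministically.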
Aside from the possibility of "emerging intelligence", I don't think this is the way to the AGI.
> Aside from the possibility of "emerging intelligence", I don't think this is the way to the AGI.
My intuition is that this is the only way to create AGI. I don't think anyone is ever going to carefully intentionally construct an AGI, it's almost certainly going to emerge from something conceptually fairly simple.
I don't think it's impossible that our own brains are also basically just a big statistical prediction model too. Maybe AGI just requires our models to be 10/100/1000x as good. Or our training data needs to be broader in a qualitative way rather than a quantitative way that we haven't quite worked out yet.
I would even be surprised if in 10 years an AI wouldn't be able to decide the Riemann hypothesis, given enough compute.
The rate of progress made in the last 10 years has been enormous, but it pales in comparison to the acceleration of the last year. Unless there are as yet unknown limits to our current methods, there does not seem to be anything to stop us from building machines that outperform the field of human mathematics.
I could sketch you a couple of paths there if we manage to leverage current LLM to become self-improving. But even if we don't manage to do that, there are paths to leverage LLM's to solve mathematics. I can outline truly remarkable approaches, which this comment is too small to contain.
Imagine you’ve only been a programmer for 2 years. Imagine not knowing what imagemagick is. Now get an answer that gets you 99% of the way to where you want to be. Now look at the documentation to see why the parameters aren’t doing what you want them to do. You just saved hours of work.
Yes, GPT and LLMs have got to be the most significant leap in the field of AI from the 2010s onwards... It kinda "solves" NLP. It is the best human-language-based UI that humanity has ever created.
But the point is, this thing does not understand what it's doing... it's become a cliché, but the people who coined the term "stochastic parrot" really knew what's up.
Like, I think there could be some GOFAI technique that may solve "figuring out how to use a tool like ImageMagick/ffmpeg" mathematically, formally, and deterministically.
And again, I'm basing all of my view on the fact that this "emerging intelligence" is a mirage.
I’ve been using AI to teach me concepts, and it’s great at it. It can sometimes be wrong, so having familiarity with the topic is important in letting it teach you. But OpenAI knows programming languages really well. It’s been amazing at teaching me concepts. Then I go test it to verify its teachings. It is certainly making me more productive at learning.
I feel like, if anything, AI will push people to upskill. A) It's obviously necessary; B) you will increasingly find yourself identifying edge cases; C) if you're in an exposed field (e.g. diagnostic medicine) you will find fewer people coming into the field, so there won't be a lot of backfills in the workforce.
ChatGPT is hit or miss, but I've had success using it either way. The hard part will be not relying too much on it. We already know that some devs won't read error messages and instead will just jump into debugging; I trust this'll only get more common with the advent of ChatGPT. However, this might actually save those devs, assuming ChatGPT answers correctly.
Once you can upload images and ask it to answer based on that, it'll be even more wild.