> With ChatGPT, it’s too easy to implement ideas without understanding the underlying concepts or even what the individual lines in a program do. Doing so for unfamiliar tasks means failing to learn something new, while doing so for known tasks leads to skill atrophy.
Imagine this: a hypothetical "GPT-7" can effortlessly design a starship capable of shuttling you from Earth to Mars. It's so reliable that across 100 flights it never fails: perfect trips, precise landings, zero issues. All it takes is the simple prompt "design a starship to Mars". In that scenario, is there any need to learn the intricacies of aerodynamics, gravitational forces, or rocket science?
The idea is to liberate ourselves from those limitations and focus on what truly interests us. That's the end game.
Someone has to. "Humanity built technology it became reliant on and forgot how to reimplement, so when it broke down nobody could fix it" is a recurring sci-fi trope.