OpenAI spends about $700,000 a day just to keep ChatGPT running, and that figure doesn't even cover its other AI products like GPT-4 and DALL·E 2. Right now it's staying afloat only because of Microsoft's $10 billion investment.
It honestly seems better suited for those tasks because it doesn't need any information you'd otherwise have to supply. The code is already there, so it can pull all the context it needs, and it's quite good at grasping what a function does, even if it sometimes lacks the why. That rarely matters for unit tests, and for documentation that's where you come in. It's also why it's called Copilot: you still make the decisions.
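For instance (a made-up sketch, not an actual Copilot transcript, with a hypothetical apply_discount function): the body alone contains everything needed to write meaningful tests, even though nothing in it explains why the discount is capped where it is.

```python
import unittest


# Hypothetical function: the body states the "what" (5% per loyalty year, capped at 25%)
# even though the "why" behind that business rule lives outside the code.
def apply_discount(price: float, customer_years: int) -> float:
    """Return the price after a loyalty discount of 5% per year, capped at 25%."""
    discount = min(customer_years * 0.05, 0.25)
    return round(price * (1 - discount), 2)


# The kind of test Copilot tends to suggest from the code above alone:
# it exercises the scaling, the cap, and the zero case, all visible in the body.
class TestApplyDiscount(unittest.TestCase):
    def test_discount_scales_with_years(self):
        self.assertEqual(apply_discount(100.0, 2), 90.0)

    def test_discount_is_capped_at_25_percent(self):
        self.assertEqual(apply_discount(100.0, 10), 75.0)

    def test_zero_years_means_no_discount(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)


if __name__ == "__main__":
    unittest.main()
```

Whether the 25% cap is correct in the first place is exactly the kind of decision that stays with you.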