
ChatGPT in the IT sector

The path that artificial intelligence has travelled is truly impressive. It was born as a concept, only to be portrayed in pop culture as something to be feared – after all, the very first appearance of AI in a film (Metropolis, 1927) was set in a dystopian vision of the future, and many of us remember what came of placing too much trust in the HAL 9000 supercomputer.

The development of artificial intelligence has accelerated in recent years. No longer a product of our imagination, it is now present in many aspects of our lives. Yet as long as its usefulness was limited to responding to simple commands voiced to virtual assistants such as Siri or Alexa, we seemed to forget that it even existed and to underestimate its potential. It was developing quietly until OpenAI released the ChatGPT service in 2022 and, for some, the world stopped in its tracks.

The elaborate algorithm, fed with a vast number of books, academic papers and random content encountered on the internet, has learned not only to answer simple questions but also to write stories and essays. Most importantly, however, GPT-3 is, according to some, the second language model to have passed the Turing Test. This means that you can ‘talk’ to it as you would to a human being.

GPT-4 was launched in March 2023, featuring even faster performance and even more relevant responses, and its information database has been expanded. Can a tool this powerful threaten developers’ position on the job market?

AI learns more quickly than humans

At least in the sense of absorbing data and recalling it unchanged when given the proper command. For an artificial intelligence to ‘learn’ one of the popular programming languages, it only needs to be ‘fed’ the documentation, and picking up further frameworks takes just a few moments. For this reason, ChatGPT is already comfortable writing simple programmes, for example in JavaScript. What is more, unlike humans, the algorithm does not type individual characters but outputs entire lines of code, which is simply faster.
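
To picture the sort of ‘simple programme’ meant here, consider the sketch below – written by hand for illustration rather than taken from actual ChatGPT output, and in TypeScript rather than plain JavaScript. It is the kind of short, self-contained function the model produces reliably from a one-line prompt.

// Illustrative only: a small utility of the kind a one-line prompt
// ("group an array of objects by a key") tends to yield.
function groupBy<T>(items: T[], key: keyof T): Map<unknown, T[]> {
  const groups = new Map<unknown, T[]>();
  for (const item of items) {
    const value = item[key];
    const bucket = groups.get(value);
    if (bucket) {
      bucket.push(item);
    } else {
      groups.set(value, [item]);
    }
  }
  return groups;
}

// Example usage: group a few ticket records by their status.
const tickets = [
  { id: 1, status: "open" },
  { id: 2, status: "closed" },
  { id: 3, status: "open" },
];
console.log(groupBy(tickets, "status"));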

Artificial intelligence is not creative

According to press materials, you can talk to ChatGPT in virtually the same way as you would to a human being. The amount of text used to teach the algorithm different languages to a communicative degree is overwhelming, but it is still just raw data. GPT-4 can use that data to explain why Albus Dumbledore behaved in a particular way in the fifth part of the Harry Potter series, but it cannot create a unique, coherent and fully meaningful story of its own on a given topic.

According to many experts (including Lucjan Suski, the CEO of Surfer), it is precisely this lack of creativity that keeps ChatGPT from being a threat to developers. It can be a useful tool in the hands of an informed user and speed up their work significantly – but very little beyond that. It will write a piece of code in a specific technology, but it will not create a comprehensive solution to a problem that did not appear in its training data. In addition, despite knowing multiple programming languages and frameworks, GPT-4 is not very flexible in using several of these tools at the same time.

ChatGPT has (or at least pretends to have) limited knowledge

You can see this most clearly when you ask the algorithm about a pop culture phenomenon, such as a fishing fanatic or a cooker that resembles a triumphal arch. ChatGPT will ‘understand’ text fed to it directly, but it does not grasp the broader context.

A similar situation awaits developers who decide to take advantage of artificial intelligence and integrate it into the systems they are currently using. GPT-4 will work out without much trouble what a particular piece of code does, but little more. It does not know the specifics of the environment it has been introduced into and is therefore unable to integrate easily. After all, it is only a language model, not the architect of the entire IT infrastructure in a given organisation. It is very likely that, when asked to implement specific functionality, it would do so in a way that is optimal in isolation but wrong for a real, functioning system.
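
To make that ‘optimal in isolation, wrong in context’ point concrete, here is a hedged TypeScript sketch. Every name in it (User, SharedCache, fetchUserFromDb) is hypothetical and stands in for whatever the real system would use; the point is only the contrast between a locally sensible suggestion and one that respects existing infrastructure.

// All names below are invented for this sketch.
interface User {
  id: string;
  name: string;
}

interface SharedCache {
  get(key: string): Promise<User | undefined>;
  set(key: string, value: User, opts: { ttlSeconds: number }): Promise<void>;
}

async function fetchUserFromDb(id: string): Promise<User> {
  // Stand-in for a real database query.
  return { id, name: "example" };
}

// In isolation, a local in-memory map looks like the optimal answer
// to "cache user lookups":
const localCache = new Map<string, User>();

async function getUserIsolated(id: string): Promise<User> {
  const cached = localCache.get(id);
  if (cached) return cached;
  const user = await fetchUserFromDb(id);
  localCache.set(id, user);
  return user;
}

// In a real system (several instances behind a load balancer, an existing
// shared cache with agreed key formats and TTLs), the same task has to go
// through infrastructure the model knows nothing about:
async function getUserIntegrated(id: string, cache: SharedCache): Promise<User> {
  const cached = await cache.get(`user:${id}`);
  if (cached) return cached;
  const user = await fetchUserFromDb(id);
  await cache.set(`user:${id}`, user, { ttlSeconds: 300 });
  return user;
}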

Debugging and testing

As I have mentioned above, ChatGPT is able to write a slightly more complex programme – for example, a tic-tac-toe game in Swift. The problem is that it can make mistakes even with such a (seemingly) simple application. What is more, it frequently produces different code in response to an identical request. And when it is asked to do something more demanding, such as creating a specific programme with comments, it can fail outright: the user then receives code that causes errors during compilation.
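
For a sense of why even tic-tac-toe leaves room for error, the hand-written TypeScript sketch below (not ChatGPT output) shows just the win-check at the heart of such a game – the sort of enumeration where a missed diagonal or a swapped row and column index slips in easily.

// Minimal win-check for a 3x3 tic-tac-toe board.
type Cell = "X" | "O" | null;
type Board = Cell[][];

function winner(board: Board): Cell {
  const lines = [
    [[0, 0], [0, 1], [0, 2]], // rows
    [[1, 0], [1, 1], [1, 2]],
    [[2, 0], [2, 1], [2, 2]],
    [[0, 0], [1, 0], [2, 0]], // columns
    [[0, 1], [1, 1], [2, 1]],
    [[0, 2], [1, 2], [2, 2]],
    [[0, 0], [1, 1], [2, 2]], // diagonals
    [[0, 2], [1, 1], [2, 0]],
  ];
  for (const line of lines) {
    const [a, b, c] = line.map(([row, col]) => board[row][col]);
    if (a !== null && a === b && a === c) return a;
  }
  return null;
}

// Example: X has completed the top row.
const board: Board = [
  ["X", "X", "X"],
  ["O", "O", null],
  [null, null, null],
];
console.log(winner(board)); // prints "X"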

The very imperfection of the code produced by ChatGPT means that it cannot be considered a threat to developers at present. If the algorithm outputs programmes with significant errors, it clearly has not detected those errors itself; and if repeating the request produces not a correction but a complete rewrite of the code, you may conclude that the AI cannot deal with debugging. For small applications you can keep regenerating the whole thing from scratch until you get an acceptable result – but when you are working on large systems, you can no longer afford to do this. The code must function in a specific environment, and in the event of an error you need to make corrections and try again. On top of this, the front-end layer still needs to be tailored for real people working on specific machines, so a person must still be there to ensure that the interface design is appropriate and accessible to the average user.

GPT-4 does not really understand what you are saying to it

You can get the impression that the algorithm either does not fully understand the code it is generating, or still has trouble interpreting user requests. This problem was described as early as 1980 by John Searle in what is known as the Chinese Room argument. It shows that, while GPT-4 easily passes the Turing Test thanks to its great command of linguistic principles and its real-time response generation, it still lacks a real understanding of the queries fed into it. It is merely an agile manipulator, capable of matching some symbols and words with others. You can see this when you ask it to generate code for an algorithm based on multiple conditions, or ask it to perform several consecutive actions in a single query.

As a result, the operator cannot simply type in what they expect, then copy the programme into the IDE and compile it. It is necessary to divide a larger task into smaller chunks and then combine them. However, if this way of working is to produce satisfactory results, the person using ChatGPT must themselves be familiar with the programming language and framework in which the algorithm writes the code. After all, the AI itself does not really know what it is displaying. Simply ask for an explanation of the code it has just produced and, with each passing moment, more and more cracks appear in the image of a perfectly smart algorithm.
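
What that chunk-by-chunk workflow might look like is sketched below in TypeScript. The scenario and every name in it (isValidEmail, normaliseName, prepareSignup) are hypothetical: two small pieces are requested separately, and the combining step remains the developer’s job, because only they know how the pieces fit the surrounding system.

// Chunk 1: "write a function that validates an e-mail address"
function isValidEmail(value: string): boolean {
  // Deliberately simple check; real validation rules would come from the project.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}

// Chunk 2: "write a function that normalises a display name"
function normaliseName(value: string): string {
  return value.trim().replace(/\s+/g, " ");
}

// The combining step is the developer's: wiring the pieces into the
// data shapes and error-handling conventions of the actual system.
interface SignupForm {
  email: string;
  name: string;
}

function prepareSignup(form: SignupForm): SignupForm {
  if (!isValidEmail(form.email)) {
    throw new Error(`Invalid e-mail address: ${form.email}`);
  }
  return {
    email: form.email.trim().toLowerCase(),
    name: normaliseName(form.name),
  };
}

console.log(prepareSignup({ email: " User@Example.com ", name: "  Jane   Doe " }));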

Will GPT-4 deprive developers of their jobs?

No. Given the extent of the knowledge and skills required of today’s developers, the algorithm will not be able to replace them any time soon; at most it can serve as a fast, easy-to-use tool. This does not mean, however, that subsequent versions of ChatGPT, or another chatbot oriented towards source code, will not catch up with real developer teams in the not-too-distant future.

Sources:

https://meetanshi.com/blog/will-chatgpt-replace-developers/

https://pl.wikipedia.org/wiki/Chi%C5%84ski_pok%C3%B3j
