ChatGPT and other natural language models have caused a great deal of intrigue and anxiety lately. Governments and businesses are increasingly recognizing the role of generative pre-trained transformers (GPTs) in shaping the cybersecurity environment. This article discusses the implications of using GPTs in software development and their potential consequences for cybersecurity in the age of artificial intelligence (AI). GPTs can improve the efficiency and productivity of programmers, but they are not a replacement for human programmers, because programming involves a complex decision-making process that goes well beyond writing code. And while GPTs may help find shallow bugs and thus prevent short-lived vulnerabilities, they are unlikely to change the balance of power between attack and defense in cybersecurity.
Generative pre-trained transformers (GPTs) are the technology of the moment. From GPT-based chatbots such as ChatGPT and Bard to the programming assistant CoPilot, this modern form of machine-learning-based AI has generated excitement, astonishment, calls to outlaw or pause its development, and social predictions ranging from utopia to robot apocalypse.
While many still fear that this technology will disrupt society, more nuanced commentary is beginning to appear. As we come to understand how GPTs work and discuss how best to use them, the conversation is becoming more productive and less panicked.
One area that deserves this more careful discussion is policy. GPTs are another example of a dual-use technology: beneficial in some applications but concerning in others. As governments weigh their influence on the global security landscape, many are asking how GPTs will change the balance of power between attack and defense in cybersecurity. In particular, the worry is that if GPTs accelerate vulnerability discovery, the rate of exploitation will increase, further tipping the delicate balance of cybersecurity in favor of attackers.
To begin to understand the issues raised by GPTs, we need to understand how these models work. GPT models are large statistical models trained on enormous amounts of text. Such a model uses existing content to predict what words are likely to come next. For example, if you ask ChatGPT to write about Paul Revere, the program will generate sentences similar to those you would likely find in the parts of its training set that contain the words "Paul Revere." Because the system is trained on human writing, the results read as if they were written by a human.
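To make this idea concrete, here is a toy sketch, entirely our own and not a description of how GPT is actually implemented: a bigram frequency model in C that "predicts" the next word by counting which word most often follows a given word in a tiny training text. Real GPT models use neural networks with billions of parameters rather than simple counts, but the underlying idea, predicting a statistically likely continuation, is the same.

#include <stdio.h>
#include <string.h>

/* A toy bigram "language model": count which word most often follows a
 * given word in a tiny training text, then predict that word. All names
 * and data here are invented for illustration. */
int main(void) {
    /* A tiny training corpus, already split into words. */
    const char *corpus[] = {
        "paul", "revere", "rode", "through", "the", "night",
        "paul", "revere", "warned", "the", "militia",
        "paul", "revere", "rode", "to", "lexington"
    };
    int n = (int)(sizeof(corpus) / sizeof(corpus[0]));

    const char *prompt = "revere";   /* predict the word that follows this */

    /* Count how often each distinct word follows the prompt word. */
    const char *followers[16];
    int counts[16];
    int nfollow = 0;

    for (int i = 0; i + 1 < n; i++) {
        if (strcmp(corpus[i], prompt) != 0)
            continue;
        const char *next = corpus[i + 1];
        int j;
        for (j = 0; j < nfollow; j++) {
            if (strcmp(followers[j], next) == 0) {
                counts[j]++;
                break;
            }
        }
        if (j == nfollow && nfollow < 16) {  /* first sighting of this follower */
            followers[nfollow] = next;
            counts[nfollow] = 1;
            nfollow++;
        }
    }

    /* "Generate" by choosing the statistically most likely follower. */
    int best = 0;
    for (int j = 1; j < nfollow; j++)
        if (counts[j] > counts[best])
            best = j;

    if (nfollow > 0)
        printf("after \"%s\", predict \"%s\"\n", prompt, followers[best]);
    return 0;
}

On this tiny corpus the program prints that "rode" follows "revere", because that pairing occurs most often; a GPT does something analogous at vastly greater scale and sophistication.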
This ability to generate statistically likely phrases is what makes GPTs useful as coding tools. Much of the code that gets written is fairly boilerplate (writing code is only a small part of programming, a distinction we will return to later). Many tasks require a fair amount of boilerplate code, and many examples of such code already exist in tutorials and in web-accessible repositories such as GitHub (which was used to train CoPilot). So ChatGPT can produce boilerplate code on request.
A programmer can then inspect and modify that code, or return to ChatGPT to request specific changes. GPT-generated code often lacks some error handling, but it draws on extensive knowledge of a wide range of libraries, making it a useful starting point for a human programmer.
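As an illustration, consider the kind of file-reading boilerplate such a tool might produce. This sketch is our own hypothetical example, not actual CoPilot or ChatGPT output; the comments mark the error-handling checks that a first draft often omits and that a reviewing programmer would need to confirm or add.

#include <stdio.h>
#include <stdlib.h>

/* Typical boilerplate a code assistant might produce: read a whole file
 * into a newly allocated buffer. The commented checks are exactly the
 * kind of error handling a first draft often leaves out. */
char *read_file(const char *path, long *size_out) {
    FILE *f = fopen(path, "rb");
    if (f == NULL)                 /* easy to forget: fopen can fail */
        return NULL;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    char *buf = malloc(size + 1);
    if (buf == NULL) {             /* easy to forget: malloc can fail */
        fclose(f);
        return NULL;
    }

    if (fread(buf, 1, size, f) != (size_t)size) {  /* and fread can fall short */
        free(buf);
        fclose(f);
        return NULL;
    }
    buf[size] = '\0';
    fclose(f);

    if (size_out)
        *size_out = size;
    return buf;
}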
Given this coding ability, some argue that GPTs will soon replace programmers. But this incorrectly assumes that programmers only write code. Programmers must also decide what code needs to be written, what variations are required, and how the pieces of code will fit together, among other big-picture issues. GPT-based tools may well make programmers more efficient, but they will not replace them entirely.
GPT-based tools can also help programmers debug code. Debugging is the process of finding and removing coding errors. Industry estimates (many of them dated, but now part of industry folklore) put the rate at between 1 and 25 bugs for every 1,000 lines of code. Given that a program like Microsoft Windows contains millions of lines of code, debugging is an essential activity.
Tools to find and fix bugs are constantly being created and adapted, but debugging remains difficult. For many bugs, GPT-based detection tools can help. Many bugs stem from a programmer leaving out some checking code, failing to recognize boundary conditions, or mistaking one kind of data for another. A buffer overflow, for example, occurs when a program writes beyond the memory allocated for a buffer, which can allow an attacker to overwrite adjacent memory with code of their own, execute arbitrary commands, elevate privileges, or gain unauthorized access to the system. GPT-based tools can recognize this kind of bug; such bugs are common enough that many examples exist on which to train the models.
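A minimal sketch of the pattern, using a deliberately simplified example of our own rather than code from any real program:

#include <stdio.h>
#include <string.h>

/* A classic shallow bug: copying attacker-controlled input into a
 * fixed-size buffer without checking its length. */
void greet(const char *name) {
    char buf[16];

    strcpy(buf, name);        /* BUG: writes past buf if name is 16 or
                                 more characters long, corrupting adjacent
                                 stack memory -- the opening an attacker
                                 needs */
    printf("hello, %s\n", buf);
}

void greet_fixed(const char *name) {
    char buf[16];

    /* Fix: bound the copy and guarantee the string is terminated. */
    snprintf(buf, sizeof(buf), "%s", name);
    printf("hello, %s\n", buf);
}

Because countless instances of exactly this pattern appear in public code and security tutorials, it is the kind of flaw a statistically trained tool can learn to flag.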
The security concern is that attackers can use GPT-based tools to find and exploit bugs. Not all bugs are exploitable, but most exploits start with a bug: an unchecked buffer size, an unsecured network connection, or unencrypted login credentials left in unprotected memory.
Concerns that GPT-based tools will change the balance between attack and defense rest on a misconception about software flaws. Not all bugs are alike. Most bugs are shallow bugs: mistakes that are easily recognized, fairly common, and easily repaired. GPT-based tools can find these shallow bugs, but not the deep bugs rooted in a system's design. Serious bugs are hard to identify and hard to fix, and often require extensive investigation and debugging effort. For example, a serious bug in the Java reflection mechanism, identified in 2016, took years to fix. The problem was caused not by a minor coding flaw but by an unexpected interaction between parts of the system, each of which was otherwise working as intended. Fixing the bug required rethinking parts of the basic system design and the interaction of its components, while ensuring that the changes did not break existing code.
Nevertheless, even shallow bugs can cause serious security flaws. The Heartbleed vulnerability in OpenSSL, discovered in 2014, was a shallow bug caused by an unchecked buffer size. The bug silently leaked data to an adversary, making it one of the worst kinds of vulnerability. Once discovered, however, the fix was easy, requiring changes to only a few lines of code. The fix did not affect programs using the corrected code: everything that worked before continued to work afterward.
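Heavily simplified, the pattern looked roughly like the sketch below. This is our own illustration, not the actual OpenSSL code, and the function names are invented; the real code and its patch were more involved, but the essence of the fix was a bounds check of about this size.

#include <stdlib.h>
#include <string.h>

/* Simplified sketch of the Heartbleed pattern: the peer supplies both a
 * payload and a claimed payload length, and the buggy code trusts the
 * claimed length. */
unsigned char *build_reply(const unsigned char *payload,
                           size_t claimed_len, size_t actual_len) {
    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;

    /* BUG: if claimed_len > actual_len, this reads past the payload and
     * silently copies adjacent memory -- keys, passwords, whatever
     * happens to be there -- into the reply sent back to the attacker. */
    memcpy(reply, payload, claimed_len);
    return reply;
}

unsigned char *build_reply_fixed(const unsigned char *payload,
                                 size_t claimed_len, size_t actual_len) {
    if (claimed_len > actual_len)   /* the missing bounds check */
        return NULL;                /* discard the malformed request */

    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;
    memcpy(reply, payload, claimed_len);
    return reply;
}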
This is of particular relevance as governments advance their cyberattack and defense strategies. Attackers can use GPT-based tools to scan code for exploitable flaws, but defenders can use the same tools to find and remediate those same flaws. And when an exploit is discovered in the wild, GPT-based tools can help locate the flaw in the code that enabled it and help fix it. OpenAI recently launched a program for finding bugs in its own artificial intelligence systems. So even with these tools in play, the race between those who exploit bugs and those who exterminate them remains fairly even. And serious vulnerabilities will not be found easily by GPT-based systems.
From a policymaker's perspective, the emergence and widespread use of GPT-based coding tools will not fundamentally change the security landscape. These tools may help attackers find some shallow bugs, but their use by defenders should offset any advantage attackers gain. Indeed, because GPT-based tools can detect such bugs before software ships, they can be expected to lead to more reliable software.
Policymakers still have plenty of legitimate concerns about the emergence of GPTs. These technologies raise questions about intellectual property, academic integrity, content moderation, and deepfake detection, all areas where policy is needed. But GPT technology will not change the cybersecurity landscape, so on that front, policymakers should direct their attention elsewhere.
…
Jim Waldo is the Gordon McKay Professor of the Practice of Computer Science at the Harvard School of Engineering and Applied Sciences, where he teaches courses on distributed systems and privacy, and serves as the school's Chief Technology Officer. He is also a professor of policy at the Harvard Kennedy School, where he teaches on topics at the intersection of technology and policy.
Angela Wu is a Master of Public Policy student at the Harvard Kennedy School and a research assistant at the Belfer Center for Science and International Affairs. Previously, Angela was a management consultant at McKinsey & Company. She earned her bachelor’s degree from Harvard University.
Image credit: Wikimedia Commons