ChatGPT may hinder the cybersecurity industry

Since its launch in November 2022, ChatGPT, an artificial intelligence (AI) chatbot, has been causing quite a stir because of the software’s surprisingly human and accurate responses. 

The auto-generative system reached a record-breaking 100 million monthly active users only two months after launching. However, while its popularity continues to grow, the current discussion within the cybersecurity industry is whether this type of technology will aid in making the internet safer or play right into the hands of those trying to cause chaos. 

AI software has a variety of cybersecurity use cases, including advanced data analysis, automating repetitive tasks, and helping to calculate risk scores. However, soon after its debut, it became clear that this easy-to-use, freely available chatbot could also help hackers infiltrate software and develop sophisticated phishing tools.

So, is ChatGPT a gift from the cybersecurity gods or a plague sent to punish the industry? To find the answer, we must look at the pros, the cons and what the future holds. Let's dive in.


What are the current dangers of ChatGPT? 

As with any new technological advancement, there will always be some negative implications, and ChatGPT is no different.

Currently, the most talked-about issue with the chatbot is how easily it can produce convincing phishing text likely to be used in malicious emails. Because of its limited safeguards, threat actors, including those whose first language is not English, can use ChatGPT to craft an eloquent, enticing message with near-perfect grammar in seconds.

And since Americans lost $40 billion in 2022 to these scams, it’s easy to see why criminals would use ChatGPT to get a slice of this lucrative illicit pie.

AI-powered chatbots also raise the question of job security. Of course, the current system couldn't replace a highly trained professional, but this technology can significantly reduce the number of logs and reports that need to be inspected by an employee. This could impact how many analysts a security operations center (SOC) would need to employ.

While the software does offer several advantages to cybersecurity businesses, plenty of companies will adopt the technology simply because of its current popularity and attempt to entice new customers with it. Using the technology purely because of its fad status can lead to misuse: companies may not install adequate safety measures, hindering progress in building an effective security program.

The cybersecurity benefits of ChatGPT

As with any new technology, disruption is an inevitable component, but that doesn’t have to be a bad thing.

Cybersecurity companies can add an extra layer of intelligence to their manual efforts of sifting through audit logs or inspecting network packets to distinguish threats from false alarms. 

Because of ChatGPT’s ability to detect patterns and search within specific parameters, it can also be used for repetitive tasks and generating reports. Cyber companies can then more intelligently calculate risk scores for threats impacting organizations by using ChatGPT as a super-powered research assistant. 
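To make that idea concrete, here is a minimal sketch, assuming the OpenAI Python client and the gpt-3.5-turbo model, of how a tool might ask ChatGPT to triage a single log entry and suggest a risk score. The prompt wording and the helper function name are illustrative assumptions, not any vendor's actual implementation.

```python
# A minimal sketch of LLM-assisted log triage. The model name, prompt wording
# and helper function are illustrative assumptions, not a production design.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def triage_log_entry(log_entry: str) -> str:
    """Ask the model whether a log entry looks like a threat or a false alarm."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep triage output as repeatable as possible
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a SOC analyst assistant. Classify the log entry as "
                    "'threat' or 'false alarm', give a 1-10 risk score, and "
                    "explain your reasoning in one sentence."
                ),
            },
            {"role": "user", "content": log_entry},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = "Failed SSH login for root from 203.0.113.42 (47th attempt in 60 seconds)"
    print(triage_log_entry(sample))
```

A real pipeline would batch entries, validate the model's output and keep a human analyst in the loop, but the "summarize, classify and score" pattern is the same.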

For example, Orca Security, an Israel-based cybersecurity company, has started to use ChatGPT's analysis capabilities to plow through its ocean of data and aid with security alerts. By realizing early how the chatbot can improve its day-to-day operations, the company can also learn from the technology, which gives it a unique advantage in tweaking its models to optimize how ChatGPT works for its business.

Furthermore, the chatbot’s natural language processing, which makes it so good at writing phishing emails, means it’s also perfect for creating complex security policies. These articulate texts could be used on cybersecurity websites and in training documents, saving precious time for valued team members.

The future of ChatGPT

ChatGPT’s AI technology is readily available to most of the world. Therefore, as with any other battle, it’s simply a race to see which side will make better use of the technology. 

Cybersecurity companies will need to continuously combat nefarious users who figure out how to use ChatGPT to cause harm in ways defenders haven't yet fathomed. And yet this hasn't deterred investors, and the future of ChatGPT looks very bright. With Microsoft investing $10 billion in OpenAI, it's clear that ChatGPT's knowledge and abilities will continue to expand.

For future versions of this technology, software developers will need to address its lack of safety measures, and the devil will be in the details.

ChatGPT itself probably won't be able to thwart this misuse to a large degree. OpenAI could put mechanisms in place to evaluate users' habits and home in on individuals who use obvious prompts like "write me a phishing email as if I'm someone's boss," or it could try to validate individuals' identities.
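OpenAI has not said whether it screens prompts this way, so the following is a purely hypothetical sketch of that kind of prompt screening; the patterns and the function name are made up for illustration.

```python
# Purely hypothetical sketch of prompt screening; not an existing OpenAI
# feature. The patterns and function name are illustrative assumptions.
import re

SUSPICIOUS_PATTERNS = [
    r"\bphishing\b",
    r"write .*(email|text message).*(as if|pretending to be)",
    r"(bypass|evade).*(spam filter|fraud detection)",
]


def looks_abusive(prompt: str) -> bool:
    """Flag prompts that plainly ask for phishing or filter-evasion content."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)


if __name__ == "__main__":
    print(looks_abusive("Write me a phishing email as if I'm someone's boss"))  # True
    print(looks_abusive("Summarize today's firewall alerts"))                   # False
```

Keyword matching like this is trivially easy to evade, which is part of why any serious screening effort quickly runs into the cost and data-protection issues noted below.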

OpenAI could even work with researchers to train its models to recognize when ChatGPT-generated text has been used in attacks elsewhere.

However, all of these ideas pose a slew of problems, including mounting costs and data protection issues.

To address the current phishing epidemic, more people need the education and awareness to identify these attacks, and the industry needs more investment from cell carriers and email providers to limit how many attacks reach users in the wild.

Wrapping up

So many products and services will stem from ChatGPT, bringing tremendous value to help protect businesses as they work on changing the world. And there will also be plenty of new tools created by hackers that will allow them to attack more people in less time and in new ways.

AI-powered chatbots are here to stay, and ChatGPT has competition, with Google's Bard and Microsoft's Bing chatbot looking to give OpenAI's creation a run for its money. Nonetheless, it's paramount that cybersecurity companies treat ChatGPT as both an offensive and a defensive consideration, rather than simply being enamored with the opportunity to generate more revenue.

Taylor Hersom is founder and CEO of Eden Data.
