Deciding which programming language to learn is a big question for developers today because of the huge time investment it requires. But that question could become redundant in a future where artificial intelligence (AI) models do the heavy lifting: understanding a problem's description and coding a solution.
Researchers from Google's AI-focused unit DeepMind claim their AlphaCode system can express solutions to problems in code, achieving a median-level score in programming competitions entered by human programmers. Those competitions require contestants to comprehend a problem described in natural language and then implement an efficient algorithm in code.
In a new non-peer-reviewed paper, DeepMind researchers detail how AlphaCode achieved an average ranking in the top 54.3% of participants in 10 previously held programming competitions with more than 5,000 participants. The competitions were hosted on the Codeforces competitive programming platform.
DeepMind claims AlphaCode is the first AI code-generation system to perform at a competitive level in coding contests against human developers. The research could improve programmer productivity and may help non-programmers express a solution without knowing how to write code.
Human contestants, and therefore AlphaCode, needed to parse the description of a challenge or puzzle and quickly write a program to solve it. This is more difficult than training a model on GitHub data to solve a simple coding challenge.
Like humans, AlphaCode needed to comprehend a multi-paragraph natural language description of the problem, background narrative details, and a description of the desired solution in terms of input and output.
To solve a problem, the competitor needs to devise an algorithm and then implement it efficiently, which can mean choosing, say, a faster programming language like C++ over Python to meet the contest's execution-time and memory constraints.
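To give a sense of what such a submission looks like, here is a minimal, hypothetical example in the style of a competitive programming solution; the problem and code are illustrative and are not taken from the AlphaCode paper. It counts pairs of numbers that sum to a target, where a naive double loop would be too slow for large inputs but a single hash-map pass stays within typical time limits.

```python
# Hypothetical Codeforces-style task (not from the AlphaCode paper):
# given n integers and a target t, count pairs (i, j) with i < j and a[i] + a[j] == t.
# A brute-force double loop is O(n^2) and would time out on large inputs;
# counting complements with a hash map gives a single O(n) pass instead.
import sys
from collections import defaultdict

def count_pairs(values, target):
    seen = defaultdict(int)        # value -> how many times it has appeared so far
    pairs = 0
    for v in values:
        pairs += seen[target - v]  # every earlier complement forms a new pair
        seen[v] += 1
    return pairs

def main():
    data = sys.stdin.read().split()
    n, target = int(data[0]), int(data[1])
    values = list(map(int, data[2:2 + n]))
    print(count_pairs(values, target))

if __name__ == "__main__":
    main()
```

The point is the algorithmic choice rather than the language: the same O(n) idea written in C++ would simply have more headroom under tight time limits.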
AlphaCode's pre-training dataset included 715 GB of code from files taken from GitHub repositories written in C++, C#, Go, Java, JavaScript/TypeScript, Lua, Python, PHP, Ruby, Rust, and Scala. The team then fine-tuned the model on a dataset of competitive programming problems scraped from Codeforces and similar sources.
AlphaCode builds on large-scale transformer models, the same family of architectures behind OpenAI's GPT-3 and Google's BERT language model. DeepMind used transformer-based language models to generate code, then filtered the output down to a small set of "promising programs" that were submitted for evaluation.
"At evaluation time, we create a massive amount of C++ and Python programs for each problem, orders of magnitude larger than previous work," DeepMind's AlphaCode team explain in a blogpost.
"Then we filter, cluster, and re-rank those solutions to a small set of 10 candidate programs that we submit for external assessment. This automated system replaces competitors' trial-and-error process of debugging, compiling, passing tests, and eventually submitting."
DeepMind has published a demonstration showing how AlphaCode codes a solution to a given problem.
DeepMind considers a few potential downsides to what it's trying to achieve. For example, models can generate code with exploitable weaknesses, including "unintentional vulnerabilities from outdated code or intentional ones injected by malicious actors into the training set."
There are also environmental costs: training the model required "hundreds of petaFLOPS days" in Google's data centers. And over the longer term, AI code generation "could lead to systems that can recursively write and improve themselves, rapidly leading to more and more advanced systems."
There is also a risk that automation reduces demand for developers, but DeepMind points out that today's code-completion tools already greatly improve programming productivity and yet, until recently, were limited to single-line suggestions and restricted to certain languages or short code snippets.
However, DeepMind emphasizes that its work is nowhere near a threat to human programmers, and that for such systems to genuinely help humanity, they will need to develop stronger problem-solving capabilities.
"Our exploration into code generation leaves vast room for improvement and hints at even more exciting ideas that could help programmers improve their productivity and open up the field to people who do not currently write code," DeepMind researchers say.