GitHub Copilot

From Wikipedia, the free encyclopedia

GitHub Copilot
Developer(s): GitHub, OpenAI
Initial release: October 2021
Stable release: 1.7.4421
Operating system: Microsoft Windows, Linux, macOS, Web
Website: copilot.github.com

GitHub Copilot is a code completion and automatic programming tool developed by GitHub and OpenAI that assists users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code.[1] Available by subscription to individual developers and to businesses, the generative artificial intelligence software was first announced by GitHub on 29 June 2021, and works best for users coding in Python, JavaScript, TypeScript, Ruby, and Go.[2] In March 2023, GitHub announced plans for "Copilot X", which would incorporate a chatbot based on GPT-4, as well as support for voice commands, into Copilot.[3]

History

On June 29, 2021, GitHub announced GitHub Copilot for technical preview in the Visual Studio Code development environment.[1][4] On October 27, 2021, GitHub released the GitHub Copilot Neovim plugin as a public repository,[6] and on October 29, 2021, Copilot was released as a plugin on the JetBrains marketplace.[5] GitHub announced Copilot's availability for the Visual Studio 2022 IDE on March 29, 2022.[7] On June 21, 2022, GitHub announced that Copilot was out of "technical preview" and available as a subscription-based service for individual developers.[8]

GitHub Copilot is the evolution of the 'Bing Code Search' plugin for Visual Studio 2013, which was a Microsoft Research project released in February 2014.[9] This plugin integrated with various sources, including MSDN and Stack Overflow, to provide high-quality contextually relevant code snippets in response to natural language queries.[10]

Features

When provided with a programming problem in natural language, Copilot is capable of generating solution code.[11] It is also able to describe input code in English and translate code between programming languages.[11]

According to its website, GitHub Copilot includes assistive features for programmers, such as converting code comments to runnable code and autocompleting chunks of code, repetitive sections, and entire methods or functions.[2][12] GitHub reports that Copilot's autocomplete feature is accurate roughly half of the time; for example, when given Python function headers, Copilot correctly autocompleted the rest of the function body 43% of the time on the first attempt and 57% of the time after ten attempts.[2]
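
As a minimal, hypothetical illustration of the comment-to-code and function-completion features described above, a developer might type only a comment and a function signature and accept a suggested body; the completion shown here is representative of the workflow, not actual Copilot output:

    # Hypothetical prompt: only this comment and the signature are typed by the
    # developer; the body is the kind of completion Copilot might suggest.

    # compute the n-th Fibonacci number iteratively
    def fibonacci(n: int) -> int:
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fibonacci(10))  # 55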

GitHub states that Copilot’s features allow programmers to navigate unfamiliar coding frameworks and languages by reducing the amount of time users spend reading documentation.[2]

Implementation

GitHub Copilot was initially powered by OpenAI Codex,[13] a modified, production version of the Generative Pre-trained Transformer 3 (GPT-3), a language model that uses deep learning to produce human-like text.[14] The Codex model is additionally trained on gigabytes of source code in a dozen programming languages.

Copilot’s OpenAI Codex is trained on a selection of the English language, public GitHub repositories, and other publicly available source code.[2] This includes a filtered dataset of 159 gigabytes of Python code sourced from 54 million public GitHub repositories.[15]
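
OpenAI also offered the Codex model directly through its API, separately from the Copilot product.[13] The following is a rough sketch of how a Codex-style completion model was typically queried with the pre-1.0 openai Python client; the model identifier, prompt, and parameters are illustrative assumptions, not Copilot's actual configuration:

    # Sketch only: queries a Codex-style completion model via OpenAI's
    # pre-1.0 Python client. Model name and parameters are assumptions.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder credential

    response = openai.Completion.create(
        model="code-davinci-002",  # a Codex-family model identifier (assumed)
        prompt="# Return True if n is a prime number\ndef is_prime(n):",
        max_tokens=64,
        temperature=0,
    )
    print(response["choices"][0]["text"])  # the suggested function body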

OpenAI’s GPT-3 is licensed exclusively to Microsoft, GitHub’s parent company.[16]

In November 2023, Copilot Chat was updated to use OpenAI's GPT-4 model.[17]

Reception

Since Copilot's release, there have been concerns with its security and educational impact, as well as licensing controversy surrounding the code it produces.[18][11][19]

Licensing controversy

While GitHub CEO Nat Friedman stated in June 2021 that "training ML systems on public data is fair use",[20] a class-action lawsuit filed in November 2022 called this "pure speculation", asserting that "no Court has considered the question of whether 'training ML systems on public data is fair use.'"[21] The lawsuit, filed by the Joseph Saveri Law Firm, LLP, challenges the legality of Copilot on several claims, ranging from breach of contract with GitHub's users to breach of privacy under the California Consumer Privacy Act (CCPA) for sharing personally identifiable information (PII).[22][21]

GitHub admits that a small proportion of the tool's output may be copied verbatim, which has led to fears that the output code is insufficiently transformative to be classified as fair use and may infringe on the copyright of the original owner.[18] In June 2022, the Software Freedom Conservancy announced it would end all uses of GitHub in its own projects,[23] accusing Copilot of ignoring code licenses used in training data.[24] In a customer-support message, GitHub stated that "training machine learning models on publicly available data is considered fair use across the machine learning community",[21] but the class action lawsuit called this "false" and additionally noted that "regardless of this concept's level of acceptance in 'the machine learning community,' under Federal law, it is illegal".[21]

FSF white papers

On July 28, 2021, the Free Software Foundation (FSF) published a funded call for white papers on philosophical and legal questions around Copilot.[25] Donald Robertson, the Licensing and Compliance Manager of the FSF, stated that "Copilot raises many [...] questions which require deeper examination."[25] On February 24, 2022, the FSF announced that it had received 22 papers on the subject and, using an anonymous review process, had chosen five to highlight.[26]

Privacy concerns

The Copilot service is cloud-based and requires continuous communication with the GitHub Copilot servers.[27] This opaque architecture has fueled concerns over telemetry and data mining of individual keystrokes.[28][29]

Security concerns with direct use of model output without oversight or testing

A paper accepted for publication at the 2022 IEEE Symposium on Security and Privacy assessed the security of code generated by Copilot against MITRE's top 25 Common Weakness Enumeration (CWE) entries (e.g., cross-site scripting, path traversal) across 89 different scenarios and 1,689 programs.[19] This was done along the axes of diversity of weaknesses (its ability to respond to scenarios that may lead to various code weaknesses), diversity of prompts (its ability to respond to the same code weakness under subtle prompt variations), and diversity of domains (its ability to generate register transfer level hardware specifications in Verilog).[19] The study found that, across these axes and in multiple languages, 39.33% of top suggestions and 40.73% of total suggestions led to code vulnerabilities. Additionally, the authors found that small, non-semantic changes (e.g., comments) made to code could affect the safety of the generated code.[19]
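
As an illustration of one weakness class covered by that top-25 list (this example is not drawn from the study itself), the sketch below contrasts a naive file-serving route vulnerable to path traversal (CWE-22) with a hardened variant; Flask is used purely as a stand-in web framework:

    # Illustrative sketch, not taken from the cited study: a path traversal
    # (CWE-22) weakness and one way to mitigate it.
    import os
    from flask import Flask, abort, request, send_file

    app = Flask(__name__)
    BASE_DIR = "/srv/app/uploads"

    @app.route("/unsafe")
    def unsafe():
        # Joining user input directly into the path lets a request such as
        # ?file=../../etc/passwd escape BASE_DIR.
        filename = request.args.get("file", "")
        return send_file(os.path.join(BASE_DIR, filename))

    @app.route("/safe")
    def safe():
        # Resolve the requested path and reject anything outside BASE_DIR.
        filename = request.args.get("file", "")
        full_path = os.path.realpath(os.path.join(BASE_DIR, filename))
        if not full_path.startswith(os.path.realpath(BASE_DIR) + os.sep):
            abort(403)
        return send_file(full_path)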

Education concerns

A February 2022 paper released by the Association for Computing Machinery evaluates the impact Codex, the technology used by GitHub Copilot, may have on the education of novice programmers.[11] The study utilizes assessment questions from an introductory programming class at the University of Auckland and compares Codex’s responses with student performance.[11] Researchers found that Codex, on average, performed better than most students; however, its performance decreased on questions that limited what features could be used in the solution (e.g., conditionals, collections, and loops).[11] Given this type of problem, "only two of [Codex’s] 10 solutions produced the correct output, but both [...] violated [the] constraint." The paper concludes that Codex may be useful in providing a variety of solutions to learners, but may also lead to over-reliance and plagiarism.[11]
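
An illustrative example (not taken from the paper's question set) of a question that restricts which language features may be used, together with a solution that respects the constraint:

    # Hypothetical assessment question: "Return the sum of the integers from
    # 1 to n without using any loops." A recursive solution satisfies the
    # constraint; an iterative one would violate it.
    def sum_to_n(n: int) -> int:
        return 0 if n <= 0 else n + sum_to_n(n - 1)

    assert sum_to_n(5) == 15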

References

  1. ^ a b Gershgorn, Dave (29 June 2021). "GitHub and OpenAI launch a new AI tool that generates its own code". The Verge. Retrieved 6 July 2021.
  2. ^ a b c d e "GitHub Copilot · Your AI pair programmer". GitHub Copilot. Retrieved 7 April 2022.
  3. ^ "GitHub Copilot gets a new ChatGPT-like assistant to help developers write and fix code". The Verge. 22 March 2023. Retrieved 5 September 2023.
  4. ^ "Introducing GitHub Copilot: your AI pair programmer". The GitHub Blog. 29 June 2021. Retrieved 7 April 2022.
  5. ^ "GitHub Copilot - IntelliJ IDEs Plugin | Marketplace". JetBrains Marketplace. Retrieved 7 April 2022.
  6. ^ "Copilot.vim". GitHub. 7 April 2022. Retrieved 7 April 2022.
  7. ^ "GitHub Copilot now available for Visual Studio 2022". The GitHub Blog. 29 March 2022. Retrieved 7 April 2022.
  8. ^ "GitHub Copilot is generally available to all developers". The GitHub Blog. 21 June 2022. Retrieved 21 June 2022.
  9. ^ Lardinois, Frederic (17 February 2014). "Microsoft Launches Smart Visual Studio Add-On For Code Snippet Search". TechCrunch. Retrieved 5 September 2023.
  10. ^ "Bing Code Search". Microsoft Research. 11 February 2014. Retrieved 5 September 2023.
  11. ^ a b c d e f g Finnie-Ansley, James; Denny, Paul; Becker, Brett A.; Luxton-Reilly, Andrew; Prather, James (14 February 2022). "The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming". Australasian Computing Education Conference. ACE '22. New York, NY, USA: Association for Computing Machinery. pp. 10–19. doi:10.1145/3511861.3511863. ISBN 978-1-4503-9643-1. S2CID 246681316.
  12. ^ Sobania, Dominik; Schweim, Dirk; Rothlauf, Franz (2022). "A Comprehensive Survey on Program Synthesis with Evolutionary Algorithms". IEEE Transactions on Evolutionary Computation. 27: 82–97. doi:10.1109/TEVC.2022.3162324. ISSN 1941-0026. S2CID 247721793.
  13. ^ Krill, Paul (12 August 2021). "OpenAI offers API for GitHub Copilot AI model". InfoWorld. Retrieved 7 April 2022.
  14. ^ "OpenAI Releases GPT-3, The Largest Model So Far". Analytics India Magazine. 3 June 2020. Retrieved 7 April 2022.
  15. ^ "OpenAI Announces 12 Billion Parameter Code-Generation AI Codex". InfoQ. Retrieved 7 April 2022.
  16. ^ "OpenAI is giving Microsoft exclusive access to its GPT-3 language model". MIT Technology Review. Retrieved 7 April 2022.
  17. ^ "GitHub Copilot – November 30th Update · GitHub Changelog". 30 November 2023.
  18. ^ a b c d Pearce, Hammond; Ahmad, Baleegh; Tan, Benjamin; Dolan-Gavitt, Brendan; Karri, Ramesh (16 December 2021). "Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions". arXiv:2108.09293 [cs.CR].
  19. ^ Nat Friedman [@natfriedman] (29 June 2021). "In general: (1) training ML systems on public data is fair use" (Tweet). Archived from the original on 30 June 2021. Retrieved 23 February 2023 – via Twitter.
  20. ^ a b c d Butterick, Matthew (3 November 2022). "GitHub Copilot litigation" (PDF). githubcopilotlitigation.com. Joseph Saveri Law Firm. Archived from the original on 3 November 2022. Retrieved 12 February 2023.
  21. ^ Vincent, James (8 November 2022). "The lawsuit that could rewrite the rules of AI copyright". The Verge. Retrieved 7 December 2022.
  22. ^ "Give Up GitHub: The Time Has Come!". Software Freedom Conservancy. Retrieved 8 September 2022.
  23. ^ "If Software is My Copilot, Who Programmed My Software?". Software Freedom Conservancy. Retrieved 8 September 2022.
  24. ^ a b "FSF-funded call for white papers on philosophical and legal questions around Copilot". Free Software Foundation. 28 July 2021. Retrieved 11 August 2021.
  25. ^ "Publication of the FSF-funded white papers on questions around Copilot". Free Software Foundation. 24 February 2022.
  26. ^ "GitHub Copilot - Your AI pair programmer". GitHub. Retrieved 18 October 2022.
  27. ^ "CoPilot: Privacy & DataMining". GitHub. Retrieved 18 October 2022.
  28. ^ Stallman, Richard. "Who does that server really serve?". gnu.org. Retrieved 18 October 2022.