You can now easily run the Qiskit Code Assistant locally. Download optimized models in GGUF format, install Ollama, and configure your VS Code or JupyterLab extension with a single command.
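The local setup can be sketched with Ollama's standard workflow. This is a hedged example: the GGUF filename and the local model name below are hypothetical placeholders, not the actual published artifact names.

```shell
# Sketch of running a GGUF model locally with Ollama.
# The .gguf filename and the "qiskit-assistant" name are assumptions.

# 1. Point a Modelfile at the downloaded GGUF weights:
echo 'FROM ./granite-8b-qiskit.Q4_K_M.gguf' > Modelfile

# 2. Register the model under a local name:
ollama create qiskit-assistant -f Modelfile

# 3. Run it; the VS Code or JupyterLab extension can then be
#    pointed at the local Ollama endpoint (default: localhost:11434):
ollama run qiskit-assistant "Create a Bell state circuit in Qiskit"
```

`ollama create` and `ollama run` are the standard Ollama commands for importing and serving a local GGUF file; only the filenames vary by model.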
The Curry–Howard correspondence—propositions as types, proofs as programs—offers a conceptual framework for understanding what's missing in current LLMs and what becomes possible when AI systems learn to construct and verify proofs natively.
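The correspondence can be made concrete in a proof assistant: under propositions-as-types, a proof of an implication is literally a program of the corresponding function type. A minimal Lean sketch:

```lean
-- Propositions as types: a proof of A ∧ B → B ∧ A is a program
-- that takes a pair and swaps its components. The type checker
-- verifying this program is the same act as checking the proof.
example (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩
```

An LLM that emits such terms gets its output checked mechanically, which is exactly the verification step current models lack.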
We've upgraded the Qiskit Code Assistant! Last month, we introduced mistral-small-3.2-24b-qiskit, replacing granite-3.3-8b-qiskit; it delivers better accuracy across key benchmarks and more precise responses for quantum programming tasks.
Announcing the latest open-source LLM releases from the Qiskit Code Assistant team, featuring Qiskit 2.0 compatibility, enhanced text understanding, and new models including Granite 3.3, Granite 3.2, and Qwen2.5-Coder series.
Released a new version of Qiskit HumanEval, compatible with Qiskit 1.4, with significant improvements to the benchmark, including more robust and rigorous code-execution tests for more accurate evaluation of LLM-generated quantum code.
Exciting update: the Qiskit Code Assistant service now exposes endpoints compatible with the OpenAI Completions API. This enables seamless use through existing libraries such as the OpenAI client and LiteLLM, making it easy to infuse Qiskit knowledge into your LLM pipelines.
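Compatibility with the OpenAI Completions API means existing clients only need a different base URL. A minimal sketch of the request shape, assuming a hypothetical service URL and model name (neither is confirmed by the announcement):

```python
# Build a payload in the shape the OpenAI Completions API expects.
# BASE_URL and the model name are illustrative assumptions.
import json

BASE_URL = "https://qiskit-code-assistant.example.com/v1"  # assumed URL


def build_completion_request(prompt: str,
                             model: str = "granite-8b-qiskit",
                             max_tokens: int = 64) -> dict:
    """Return a Completions-API-style request body."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }


payload = build_completion_request("from qiskit import QuantumCircuit\n")
body = json.dumps(payload)  # what a client would POST to {BASE_URL}/completions
```

With the official client the same payload would be sent via `OpenAI(base_url=BASE_URL, api_key=...).completions.create(**payload)`; LiteLLM works analogously through its `api_base` parameter.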
Released granite-8b-qiskit-rc-0.10, the latest revision of the LLMs that power the Qiskit Code Assistant. Trained on a significantly expanded Qiskit synthetic dataset, this marks the final model using the current training approach as we pivot to newer Granite base models and cutting-edge techniques.
Presenting the Qiskit HumanEval benchmark for LLMs at IEEE Quantum Week 2024 in the SYS-BNCH Benchmarking session. Available afterwards at the IBM Quantum booth to discuss AI and quantum computing initiatives.
Published a research paper introducing the Qiskit HumanEval dataset for evaluating large language models' capability to generate quantum computing code. The dataset comprises more than 100 quantum computing tasks with prompts, solutions, test cases, and difficulty ratings, establishing benchmarks for generative AI tools in quantum code development.
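The prompt/solution/test/difficulty structure described above can be sketched as a small evaluation harness. The field names and runner below are illustrative assumptions, not the dataset's actual schema, and the sample task is a deliberately trivial non-quantum one so the sketch runs without Qiskit installed:

```python
# Sketch of a HumanEval-style task record and a pass/fail check.
# Field names and the runner are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class EvalTask:
    prompt: str              # task description given to the LLM
    canonical_solution: str  # reference implementation
    test: str                # code that raises AssertionError on failure
    difficulty: str          # e.g. "basic", "intermediate", "advanced"


def passes(task: EvalTask, candidate_code: str) -> bool:
    """Execute candidate code, then its test, in a shared namespace."""
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)
        exec(task.test, namespace)
        return True
    except Exception:
        return False


# Toy stand-in for a real quantum task, to keep the sketch self-contained.
task = EvalTask(
    prompt="Write add(a, b) returning the sum of two integers.",
    canonical_solution="def add(a, b):\n    return a + b\n",
    test="assert add(2, 3) == 5",
    difficulty="basic",
)
```

Scoring a model then reduces to generating a candidate per prompt and counting how many pass their execution tests.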