The Inevitability of Rust's Success
The Unexpected Convergence
Three independent forces are reshaping software development, each following its own logic, each seemingly unrelated.
The first is a security crisis:
70% of critical vulnerabilities in major platforms stem from memory safety issues 1 2, costing billions in breaches and patches.
The second is an economic constraint:
Data center energy consumption is growing at 12% annually, heading toward 3% of global electricity by 2030 3, while water consumption for cooling threatens to double or quadruple by 2028 4.
The third is a technological shift:
The rise of AI code generation, where training data quality increasingly determines model performance 5 6.
These forces favor a single solution: Rust.
Not because Rust was designed to solve all three problems, but because its fundamental design decisions — compiler-enforced memory safety, zero-cost abstractions, and an excellent type system — happen to address each point simultaneously.
This isn’t incremental improvement. It’s a structural advantage that compounds over time.
I. The Memory Safety Crisis: The 70% Problem
The numbers are stark and consistent across every major platform. Microsoft analyzed twelve years of security patches from 2006 to 2018 and found that 70% of all CVEs were memory safety issues 1.
Google’s Chromium security team reported an identical figure: 70% of high-severity bugs since 2015 were memory unsafety problems—use-after-free, buffer overflows, out-of-bounds access 2. In 2021, Google’s Project Zero documented that 67% of in-the-wild zero-day exploits were memory corruption vulnerabilities 7.
The most compelling evidence comes from Android. In 2019, 76% of Android platform vulnerabilities were memory safety issues. By 2024, after aggressive Rust adoption, that figure dropped to 24%—a 68% reduction in five years 8. The correlation is direct: zero memory safety vulnerabilities have been discovered in Android’s Rust code to date. The Rust code also has a rollback rate less than half that of C++, indicating higher code quality from the start 8.
This isn’t just a technical problem; it has become a matter of government policy.
In November 2022, the NSA published “Software Memory Safety” guidelines recommending Rust explicitly 9. In December 2023, CISA, the NSA, the FBI, and international partners from Australia, Canada, New Zealand, and the UK issued “The Case for Memory Safe Roadmaps,” urging software manufacturers to prioritize memory-safe languages 10. In February 2024, the White House Office of the National Cyber Director stated bluntly: “The highest leverage method to reduce memory safety vulnerabilities is to secure one of the building blocks of cyberspace: the programming language” 11.
Why not Java, Go, or Python? Each has fundamental limitations.
Java’s garbage collector introduces unpredictable pauses (p99 latencies of 300-500ms with G1GC) 12 and requires a JVM with megabytes of overhead, making it unsuitable for embedded systems, kernel modules, or real-time applications 13.
Go improves on Java but still suffers from GC pauses (1-500ms) that prevent use in real-time systems 14, and its runtime overhead prohibits bare-metal deployment — TinyGo offers only a limited subset 14. Python requires an interpreter, has the GIL preventing true parallelism, and is orders of magnitude slower than compiled languages 15.
Only Rust and C++ can operate at the systems level with zero runtime overhead. But C++ has spent thirty years proving that manual memory management in a complex language leads to the 70% problem. Rust enforces memory safety at compile time without garbage collection overhead.
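As a minimal illustration of what that compile-time enforcement looks like (a toy example, not drawn from any of the cited codebases), consider the pattern behind many use-after-free bugs: holding a reference into a container while mutating it.

```rust
fn main() {
    let mut names = vec![String::from("alice")];

    // A shared reference borrows the first element.
    let first = &names[0];

    // Uncommenting the next line sets up the classic C++ use-after-free /
    // iterator-invalidation bug: the push may reallocate the Vec's buffer
    // and leave `first` dangling. The borrow checker rejects it at compile
    // time ("cannot borrow `names` as mutable because it is also borrowed
    // as immutable"), so the bug never ships.
    // names.push(String::from("bob"));

    println!("{first}");
}
```

In C++, the equivalent code compiles and may even appear to work until the vector happens to reallocate.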
Memory safety isn’t Rust’s differentiator — it’s table stakes for modern code in 2025.
The real story is what compiler-enforced correctness enables next.
II. Economics
Resource Scarcity
According to the International Energy Agency’s April 2025 report, global data center energy consumption will grow from 415 TWh in 2024 to 945 TWh by 2030—a 128% increase 3. For context, that’s approaching 3% of global electricity consumption, with a 12% annual growth rate that’s four times faster than overall electricity demand growth.
The water problem is worse. The Lawrence Berkeley National Laboratory’s 2024 study projects that U.S. data centers’ direct water consumption could double or quadruple from 17 billion gallons in 2023 to 34-68 billion gallons by 2028 4. Indirect water consumption for electricity generation is twelve times higher. Google consumed 8.1 billion gallons across its data centers in 2024, an 88% increase since 2019 16. These aren’t abstract numbers — Ireland and Singapore have imposed data center moratoriums due to grid saturation and resource constraints 17.
When energy and water become bottlenecks, performance per watt matters more than developer convenience.
The academic evidence is unambiguous. Pereira et al.’s 2017 ACM study measured energy consumption across 27 languages using Intel RAPL 18. Compiled languages averaged 120 joules per benchmark. Virtual machine languages (including Java) averaged 576 joules — 4.8 times more energy. Interpreted languages (Python, Ruby) averaged 2,365 joules — 19.7 times more than compiled languages. Java consumed twice the energy of C.
Python consumed 45 times more energy than C++.
Memory efficiency scales similarly. A benchmark measuring 1 million concurrent tasks showed Rust’s Tokio runtime using 213MB of memory 19. Java’s virtual threads required 1,154MB (5.4x more). Go required 2,658MB (12.4x more). Idle memory overhead tells the same story: Rust uses 0.36MB, Go uses 0.86MB, Java uses 160MB — 444 times more than Rust.
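The setup behind that benchmark is simple to approximate. A rough sketch in the spirit of the cited measurement (the task count and sleep duration here are illustrative, and it assumes a Cargo project depending on the tokio crate with its “full” feature set):

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Scale this toward 1_000_000 to reproduce the memory comparison;
    // each task is just a small heap-allocated state machine, not an OS thread.
    let num_tasks = 100_000;
    let mut handles = Vec::with_capacity(num_tasks);

    for _ in 0..num_tasks {
        handles.push(tokio::spawn(async {
            tokio::time::sleep(Duration::from_secs(10)).await;
        }));
    }

    for handle in handles {
        handle.await.unwrap();
    }
}
```

Because a suspended task is only a few hundred bytes of state, total memory stays in the low hundreds of megabytes even at a million tasks, which is precisely the gap the benchmark quantifies against the Java and Go runtimes.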
Binary sizes compound the problem: a minimal Rust web service compiles to 4.24MB in a Docker container with a scratch base image. The equivalent Go service requires 8.68MB, Java with JRE requires 113MB, and Python with Alpine multi-stage builds exceeds 391MB 20.
The production case studies quantify these theoretical advantages. Cloudflare replaced NGINX with Pingora, a Rust-based proxy handling over 1 trillion requests per day 21. The results: 70% less CPU consumption, 67% less memory usage, 5ms median latency improvement, and 80ms improvement at p95. For one major customer, connection reuse improved from 87.1% to 99.92%, eliminating 434 years of handshake time daily across all customers.
TikTok’s payment service migration from Go to Rust doubled throughput (105,000 QPS to 210,000 QPS on a critical endpoint), reduced CPU utilization from 78% to 52%, cut memory usage from 7.4% to 2.07% (72% reduction), and improved p99 latency from 19.87ms to 4.79ms (76% improvement) 22. The projected annual savings exceeded $300,000 from eliminating over 400 vCPU cores.
Datadog migrated its static analyzer from Java to Rust and achieved 3x faster analysis with 10x less memory, enabling real-time IDE integration that was previously impractical 23.
Discord’s migration from Go to Rust for the Read States service powering 11 million concurrent users eliminated GC spikes that occurred every 2 minutes, reduced latency from milliseconds to microseconds (best case 6.5x faster, worst case 160x faster), and made performance completely predictable 24. Grab’s counter service rewrite from Go to Rust achieved 5x resource efficiency reduction, cutting from 20 CPU cores to 4.5 cores at 1,000 QPS and reducing infrastructure costs by 70% 25.
These aren’t cherry-picked examples. They represent a consistent pattern: when organizations hit performance ceilings with garbage-collected languages and migrate to Rust, they achieve 50-70% resource reductions while improving reliability.
In a world heading toward energy and water constraints, with the EU mandating energy efficiency reporting 26 and carbon costs rising, these efficiency gains transition from optimization to competitive necessity.
Rust achieves performance parity with C++ (typically within 1-20% across benchmarks from the Computer Language Benchmarks Game) 27 while providing memory safety guarantees. That combination — C-level performance with enforced correctness—becomes decisive when resource costs escalate.
Full-Stack Unification: The Polyglot Tax
Software complexity is killing productivity, and polyglot architectures are a primary cause.
Context switching between languages and frameworks imposes quantifiable costs. Gloria Mark’s 2008 CHI Conference research found that developers require 23 minutes and 15 seconds to fully regain focus after an interruption, with 45 minutes required for complex coding tasks 28. Interrupted work contains 25% more errors than uninterrupted work. Industry estimates place the cost of context switching at $50,000 per developer annually, with a 20-40% productivity loss when working on multiple tasks 29.
The polyglot tax manifests in multiple ways. Serialization boundaries between services written in different languages create type safety gaps, security vulnerabilities from deserialization attacks, and performance degradation 30.
Frontend/backend validation logic must be duplicated and kept synchronized — attackers bypass client-side validation entirely, while developers forget to update one side when requirements change 31.
Build system chaos emerges when different components use Maven, pip, and npm, each with its own dependency resolution, configuration files, and conventions 32.
Real companies provide the evidence. Uber’s microservices architecture grew to over 1,000 services with tangled dependencies that engineers called the “Death Star” 33. Finding and using appropriate services became difficult because each was structured differently, sometimes in different languages. Local standards failed because services couldn’t trust the availability of other microservices.
Netflix, processing 30 million requests per second, identified “variance” (operational drift) as one of three primary scaling challenges 34. Using multiple programming languages increased operational burden and learning curves, while business logic duplication across technologies compounded maintenance challenges.
The companies that succeeded with large codebases often rejected polyglot complexity.
Shopify maintained a 2.8 million line Ruby monolith with 1,000+ developers rather than fragment into microservices, explicitly citing the advantages of using the same language over being “more optimal” 35.
DHH at Basecamp supported 6 platforms with just 12 programmers and 7 designers, arguing that “every time you extract a collaboration between objects to a collaboration between systems, you’re accepting a world of hurt” 36.
Etsy maintained monolithic PHP despite recognizing “better alternatives” because “the advantages of being more optimal do not outweigh the advantages of using the same language a lot” 37.
Rust offers a different path: full-stack unification through extreme portability. Rust compiles natively to targets that Java, Go, and Python cannot reach.
Java requires a JVM with megabytes of overhead, prohibiting deployment to bare-metal microcontrollers, kernel modules, or devices with less than 1MB of RAM 13.
Go’s runtime overhead similarly prevents bare-metal execution, with TinyGo offering only a limited subset of language features 14.
Python requires an interpreter and cannot run in kernel space or on microcontrollers without MicroPython’s constrained environment 15.
Rust runs everywhere. Official platform support spans x86-64 and ARM64 on Windows, Linux, and macOS (Tier 1), plus iOS, Android, WebAssembly, and embedded microcontrollers including ARM Cortex-M (M0, M0+, M3, M4, M7), RISC-V, AVR (Arduino), and Xtensa (ESP32) 38. Production examples demonstrate the possibilities: ESA deployed Rust on the OPS-SAT CubeSat satellite 39. Microsoft’s Azure IoT Edge security daemon comprises 60,000 lines of Rust 40. STABL Energy has run ESP32 firmware written in Rust in production for over a year 41.
The unification advantage appears in production architectures.
1Password built its core library in 100% Rust with thin UI shells per platform, achieving approximately 70% code reuse across macOS, iOS, Windows, Android, Linux, browser extensions, and web 42.
Oxide Computer’s Hubris microcontroller OS is written entirely in Rust with zero C code in the entire system, running in production on ARM Cortex-M hardware 43.
The tonari team shared protocol code between embedded firmware (compiled with #[no_std]) and desktop applications (compiled with the std feature enabled), describing it as “really cool to be able to share libraries between two projects which run on radically different hardware” 44.
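A sketch of what that sharing pattern typically looks like (a hypothetical crate and hypothetical message types, not tonari’s actual code): a library that builds without the standard library by default and opts into std behind a feature flag.

```rust
// lib.rs of a hypothetical `protocol` crate shared by firmware and desktop.
// With default features it is #![no_std] and allocation-free, so it builds
// for a Cortex-M or ESP32 target; the desktop app enables a "std" feature.
#![cfg_attr(not(feature = "std"), no_std)]

/// A tiny wire format both sides agree on.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Command {
    Ping,
    SetBrightness(u8),
}

impl Command {
    /// Encode into a fixed two-byte frame; no heap needed.
    pub fn encode(self) -> [u8; 2] {
        match self {
            Command::Ping => [0x01, 0x00],
            Command::SetBrightness(level) => [0x02, level],
        }
    }

    /// Decode a frame, rejecting unknown opcodes.
    pub fn decode(frame: [u8; 2]) -> Option<Self> {
        match frame {
            [0x01, _] => Some(Command::Ping),
            [0x02, level] => Some(Command::SetBrightness(level)),
            _ => None,
        }
    }
}
```

Because the firmware and the desktop client consume the same crate, a change to the encoding is a single edit that the compiler propagates to both targets.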
Full-stack Rust frameworks eliminate traditional boundaries. Leptos enables server-side rendering with a Rust backend and WASM frontend compiled from the same Rust code, sharing types and validation logic via Cargo workspaces 45. Dioxus supports Web, Desktop, Mobile, and SSR from a single codebase, with production use at Airbus, ESA, and Huawei 46. Tauri 2.0 deploys to Linux, macOS, Windows, Android, and iOS from one codebase 47.
The polyglot tax — context switching costs, duplicated logic, serialization boundaries, tooling complexity — represents wasted effort that compounds over time.
Rust’s ability to span embedded to cloud with a single language, a single build system (Cargo), and a single package repository (crates.io) eliminates entire categories of accidental complexity. When the same type definitions, validation logic, and business rules compile for microcontrollers, browsers, and servers, the productivity gains are structural, not incremental.
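A minimal sketch of that sharing, assuming a Cargo workspace with a common crate depended on by both the server and the WASM frontend (the struct, rules, and crate layout here are invented for illustration, and serde with the derive feature is assumed):

```rust
// shared/src/lib.rs -- pulled in by the backend and the WASM frontend alike,
// so the validation rule below cannot drift out of sync between the two.
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SignupForm {
    pub email: String,
    pub password: String,
}

#[derive(Debug, PartialEq)]
pub enum ValidationError {
    InvalidEmail,
    PasswordTooShort,
}

impl SignupForm {
    /// The single validation routine, enforced on both sides of the wire.
    pub fn validate(&self) -> Result<(), ValidationError> {
        if !self.email.contains('@') {
            return Err(ValidationError::InvalidEmail);
        }
        if self.password.chars().count() < 12 {
            return Err(ValidationError::PasswordTooShort);
        }
        Ok(())
    }
}
```

The frontend calls validate() before submitting and the server calls it again on receipt; an attacker can still bypass the client, but the server-side check is guaranteed to be the same logic, not a hand-maintained copy.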
III. AI: The Decisive Factor
The most profound reason Rust will succeed is how it interacts with AI code generation. This is where the previous pillars converge into inevitability.
Training data quality trumps quantity.
Microsoft Research’s June 2023 paper “Textbooks Are All You Need” demonstrated this conclusively with phi-1, a 1.3 billion parameter model trained on only 7 billion tokens — 6 billion from filtered web data and less than 1 billion of synthetic “textbook quality” Python 5. This model achieved 50.6% pass@1 on HumanEval and 55.5% on MBPP, outperforming models ten times larger in parameters trained on datasets one hundred times larger.
The conclusion: “High quality data dramatically improves learning efficiency.” Quality trumped quantity.
The November 2024 study “Is Training Data Quality or Quantity More Impactful?” confirmed that “training data quality plays a more significant role in overall performance of SLMs” 48. Models trained on 25-50% of the original dataset with high quality matched or exceeded performance of models trained on the full noisy dataset. At 100% duplication (low quality), accuracy degraded catastrophically by 40%.
The most revealing study is “Clean Code, Better Models” (2025), which analyzed code smell propagation in LLM training data 6. The researchers found that 85%+ of code smells in CodeSearchNet-Python propagate to LLM outputs. Cleaning the dataset (removing 96.8% of code smells while maintaining 91.3% functional correctness) improved Qwen-Coder code completion by 12.2% and DeepSeek-V2 by 11.7%. Code smells in generated outputs dropped by 79-83%. Critically, models fine-tuned on the original smelly dataset performed worse than base models—low-quality training data actively degrades model performance.
This is where Rust’s structural advantages become decisive. The Stack dataset (BigCode) contains 40GB of Rust code compared to 193GB of C++ 49. At first glance, this appears to be a disadvantage for Rust—less training data means LLMs should perform worse. And indeed, the June 2024 GitHub Copilot LeetCode study showed Rust with 62.23% acceptance rate on 1,760 problems, trailing Java (75.66%) and C++ (73.33%) 50. The study attributed Rust’s lower performance to “not having such a huge codebase” in training data.
But this misses a fundamental point.
The C++ corpus incorporates many memory bugs and instances of potential undefined behavior. According to the Fuzz Introspector indexing overview, which tracks open-source projects under continuous fuzzing in each language:
- C++: 405 projects
- Java (JVM): 228 projects
- Rust: 81 projects
Java studies found that 38-65% of commits fail to compile 51. C++ template errors produce 67-87 lines of “unintelligible mess of angle brackets” for simple mistakes 52.
Rust’s corpus is small but comparatively clean, incorporating the programming best practices and lessons of the last decade.
Cargo enforces dependency management and build reproducibility. Rustfmt standardizes formatting across the ecosystem. Clippy (the Rust linter) averages 21 warnings per thousand lines of code across 94,715 analyzed projects, with automated refactoring tools reducing warning density dramatically 53.
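For readers who have not used these tools, the feedback Clippy gives is concrete and usually mechanical to fix, as in this small, contrived example:

```rust
// `cargo clippy` flags the explicit `return` below (clippy::needless_return,
// a warn-by-default style lint) and suggests the idiomatic tail expression;
// `cargo fmt` normalizes the layout. Feedback like this tends to be applied
// before code is published, which keeps the public corpus comparatively uniform.
fn double(x: i32) -> i32 {
    return x * 2;
}

fn main() {
    println!("{}", double(21));
}
```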
The training data doesn’t contain memory corruption patterns because the compiler prevents their existence.
This creates a virtuous cycle that accelerates over time:
- Rust’s compiler enforces quality, producing a high-quality training corpus
- The high-quality corpus trains LLMs to generate better Rust code
- Compiler feedback enables AI agents to converge rapidly on correct solutions
- AI-generated Rust code maintains high quality (no memory bugs introduced)
- The corpus improves, training better models in the next iteration
The compiler feedback loop is the key mechanism.
Microsoft Research’s RustAssistant (presented at ICSE 2025) demonstrated this with 74% accuracy fixing compilation errors on real-world GitHub repositories and 91-93% accuracy on focused benchmarks 54. The tool operates through iteration between the LLM and the Rust compiler: the compiler produces an error message, the LLM generates a fix as a code diff, the compiler verifies the fix, and if new errors appear, context flows back to the LLM. This loop continues until the code compiles error-free.
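A stripped-down sketch of that loop makes the mechanism concrete. This is not RustAssistant’s actual implementation; the LLM call and the patch application are stand-in stubs, and only the compiler interaction is real.

```rust
use std::process::Command;

/// Run `cargo check` in the project and return diagnostics if compilation fails.
fn compile_errors(project_dir: &str) -> Option<String> {
    let output = Command::new("cargo")
        .args(["check", "--message-format=short"])
        .current_dir(project_dir)
        .output()
        .expect("failed to run cargo");
    if output.status.success() {
        None
    } else {
        Some(String::from_utf8_lossy(&output.stderr).into_owned())
    }
}

/// Stand-in for an LLM call: send the diagnostics, get back a code diff.
fn ask_llm_for_patch(errors: &str) -> String {
    unimplemented!("send {errors:?} to your model of choice")
}

/// Stand-in for applying the returned diff to the working tree.
fn apply_patch(_project_dir: &str, _patch: &str) {}

fn main() {
    let project_dir = "./target-project";
    for attempt in 1..=10 {
        match compile_errors(project_dir) {
            None => {
                println!("compiles cleanly after {attempt} iteration(s)");
                return;
            }
            Some(errors) => {
                // The compiler's diagnostics are the feedback signal that lets
                // the agent converge instead of guessing blindly.
                let patch = ask_llm_for_patch(&errors);
                apply_patch(project_dir, &patch);
            }
        }
    }
    eprintln!("did not converge within the iteration budget");
}
```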
The Rust compiler’s error messages enable this convergence.
A Google internal survey found 91% of developers satisfied with Rust compiler diagnostic quality 55. The 2021 Rust Survey reported 90% of respondents praised compile error messages 56. An academic study found that 53.6% of Rust compiler violations contain all necessary information to fix the error directly 55.
Comparative analysis across eight languages ranked Rust #1 for error message quality, with the assessment: “Best overall. Makes it easy to get into language or fix errors. Shows similar methods when mistakes made.” Java ranked last: its short “cannot find symbol” message offers minimal helpful information 57.
A real-world case study from runmat.org quantified this advantage 58. Using LLM-generated Rust to build a MATLAB-compatible runtime, the developer reported that the “fast generate-compile-fix loop quickly prunes bad branches” because “each generated snippet is validated against strict rules, helping models converge faster on usable solutions.” The result: “Transformed what would require years into a three-week project.”
The contrast with C++ is stark. C++ allows undefined behavior that compiles successfully but fails at runtime in unpredictable ways. AI agents can generate syntactically correct C++ that compiles but contains use-after-free bugs, buffer overflows, or data races. The agent receives no feedback until runtime, and even then, undefined behavior may not manifest consistently. The agent loops without converging.
In Rust, the compiler rejects unsafe code. The agent receives immediate, actionable feedback. It iterates until compilation succeeds. And when Rust code compiles, it typically works — the type system, borrow checker, and lifetime analysis eliminate entire bug classes.
AI agents can achieve higher success rates in Rust than in languages with weaker compile-time guarantees, even with less training data.
As AI agents write more code, languages with compiler-enforced correctness and helpful error messages dominate.
AI agents can loop until compilation success in Rust. They can get stuck in C++ undefined behavior.
This is the winning argument.
Rust’s current disadvantage in LLM performance (smaller training corpus, lower Copilot acceptance rates) inverts into an advantage as AI code generation scales. The compiler feedback loop enables 74-93% AI agent convergence rates. And every line of AI-generated Rust code that passes the compiler adds high-quality training data for the next model iteration.
When human developers and AI agents both prefer the language with the best compiler, network effects accelerate.
More Rust code → better AI tools for Rust → more developers productive in Rust → more code. The flywheel spins faster because the compiler guarantees corpus quality.
The Inevitability
Three forces converge, each following independent logic:
- Computing needs memory safety. Governments mandate it (NSA, CISA, White House ONCD) [9,10,11]. Companies demonstrate it (Android reduced vulnerabilities by 68% in five years with Rust) 8. The 70% problem demands solutions beyond manual memory management.
- Economics demand efficiency. Data centers consume 415 TWh growing to 945 TWh by 2030 3. Water consumption may double or quadruple by 2028 4. Production migrations show 50-70% resource reductions [21,22,23,24,25]. When energy and water constrain growth, performance per watt becomes competitive advantage.
- AI and humans prefer the best compiler feedback. Training data quality matters more than quantity (phi-1 outperformed 10x larger models) 5. Clean corpora train better models (12-17% improvement from removing code smells) 6. Compiler iteration enables 74-93% AI convergence rates 54. Small-but-clean corpus beats large-but-contaminated corpus 49.
Rust satisfies all three requirements simultaneously.
C++ offers equivalent performance but lacks safety, generating the 70% CVE problem and contaminating training data with undefined behavior.
Java and Go provide safety through garbage collection but suffer efficiency penalties (2-5x energy consumption, 5-12x memory overhead, GC pauses) that prevent deployment to embedded systems, real-time applications, and kernel modules.
Python maximizes developer productivity for scripting but consumes 45x more energy and cannot operate at systems level.
Rust achieves C-level performance (within 1-20% on benchmarks) 27 with enforced memory safety (zero memory bugs in Android’s Rust code) 8, deploys everywhere from embedded microcontrollers to satellites to cloud services (enabling full-stack unification) [38,39,40,41], and provides compiler feedback quality that satisfies 91% of developers and enables 74-93% AI agent convergence rates [55,56].
Each pillar reinforces the others:
Memory safety improves training corpus quality, efficiency enables broader deployment across the stack, full-stack unification reduces complexity that AI agents must navigate.
Altogether, this forms a compelling case for the future of Rust.
References
1. Microsoft Security Response Center (2019). “A proactive approach to more secure code.” https://www.microsoft.com/en-us/msrc/blog/2019/07/a-proactive-approach-to-more-secure-code/
2. Chromium Security Team. “Memory Safety.” https://www.chromium.org/Home/chromium-security/memory-safety/
3. International Energy Agency (2025). “Energy and AI” Report (Executive Summary). https://www.iea.org/reports/energy-and-ai/executive-summary
4. Lawrence Berkeley National Laboratory (2024). “2024 US Data Center Energy Usage Report.” https://eta.lbl.gov/publications/2024-lbnl-data-center-energy-usage-report
5. Gunasekar et al. (2023). “Textbooks Are All You Need.” Microsoft Research, arXiv:2306.11644. https://arxiv.org/abs/2306.11644
6. “Clean Code, Better Models” (2025). arXiv:2508.11958v1. https://arxiv.org/html/2508.11958v1
7. Google Project Zero (2022). “The More You Know, The More You Know You Don’t Know.” https://googleprojectzero.blogspot.com/2022/04/the-more-you-know-more-you-know-you.html
8. Google Security Blog (2024). “Eliminating Memory Safety Vulnerabilities at the Source.” https://security.googleblog.com/2024/09/eliminating-memory-safety-vulnerabilities-Android.html
9. National Security Agency (2022). “Software Memory Safety.” CSI Document ID: U/OO/219936-22. https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI_SOFTWARE_MEMORY_SAFETY.PDF
10. CISA (2023). “The Case for Memory Safe Roadmaps.” Multi-agency publication. https://www.cisa.gov/sites/default/files/2023-12/The-Case-for-Memory-Safe-Roadmaps-508c.pdf
11. White House ONCD (2024). “Back to the Building Blocks: A Path Toward Secure and Measurable Software.” https://bidenwhitehouse.archives.gov/wp-content/uploads/2024/02/Final-ONCD-Technical-Report.pdf
12. HBase blog and various Java GC performance tuning guides; G1GC p99 latencies of 300-500ms are typical.
13. JetBrains (2025). “Rust vs Java.” https://blog.jetbrains.com/rust/2025/08/01/rust-vs-java/
14. Bitfield Consulting (2025). “Rust vs Go 2025.” https://bitfieldconsulting.com/posts/rust-vs-go
15. DataCamp. “Rust vs Python.” https://www.datacamp.com/blog/rust-vs-python
16. Google Environmental Reports (2024/2025). Water consumption data.
17. Various news sources on the Ireland (EirGrid) and Singapore data center moratoriums.
18. Pereira et al. (2017). “Energy Efficiency Across Programming Languages.” ACM SIGPLAN. https://greenlab.di.uminho.pt/wp-content/uploads/2017/09/paperSLE.pdf and https://dl.acm.org/doi/10.1145/3136014.3136031
19. Piotr Kołaczkowski. “How Much Memory Do You Need to Run 1 Million Concurrent Tasks?” https://pkolaczk.github.io/memory-consumption-of-async/
20. MichalStrehovsky/sizegame (GitHub). https://github.com/MichalStrehovsky/sizegame
21. Cloudflare Blog (2022). “How we built Pingora, the proxy that connects Cloudflare to the Internet.” https://blog.cloudflare.com/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/
22. wxiaoyun.com (2024). “Rust Rewrite Case Study.” https://wxiaoyun.com/blog/rust-rewrite-case-study/
23. Datadog Blog. “How we migrated our static analyzer from Java to Rust.” https://www.datadoghq.com/blog/engineering/how-we-migrated-our-static-analyzer-from-java-to-rust/
24. Discord Engineering (2020). “Why Discord is Switching from Go to Rust.” https://discord.com/blog/why-discord-is-switching-from-go-to-rust
25. Grab Engineering (2025). “Counter Service: How We Rewrote It in Rust.” https://engineering.grab.com/counter-service-how-we-rewrote-it-in-rust
26. European Commission. EU Energy Efficiency Directive (EED) 2024 data center reporting requirements.
27. Computer Language Benchmarks Game (Debian). https://benchmarksgame-team.pages.debian.net/benchmarksgame/
28. Mark, Gloria et al. (2008). “The cost of interrupted work: More speed and stress.” CHI Conference. https://doi.org/10.1145/1357054.1357072
29. Tech World with Milan newsletter. “Context Switching is the Main Productivity Killer.” https://newsletter.techworld-with-milan.com/p/context-switching-is-the-main-productivity
30. Daniel Miessler. “Serialization Security Bugs Explained.” https://danielmiessler.com/blog/serialization-security-bugs-explained
31. OWASP Community. “Improper Data Validation.” https://owasp.org/www-community/vulnerabilities/Improper_Data_Validation
32. Dev.to. “One Project, One Toolchain: Taming Polyglot Development.” https://dev.to/codigger/one-project-one-toolchain-taming-polyglot-development-with-ose-1o7p
33. Netguru. “Scaling Microservices.” https://www.netguru.com/blog/scaling-microservices
34. System Design One newsletter. “Netflix Microservices.” https://newsletter.systemdesign.one/p/netflix-microservices
35. Shopify Engineering. “Deconstructing the Monolith: Designing Software that Maximizes Developer Productivity.” https://shopify.engineering/deconstructing-monolith-designing-software-maximizes-developer-productivity
36. DHH / Signal v. Noise. “The Majestic Monolith.” https://signalvnoise.com/svn3/the-majestic-monolith/
37. Medium. “Microservices, Monoliths and Laser Nail Guns: How Etsy Finds the Right Focus.” https://medium.com/s-c-a-l-e/microservices-monoliths-and-laser-nail-guns-how-etsy-finds-the-right-focus-in-a-sea-of-cf718a92dc90
38. Rust Platform Support documentation. https://doc.rust-lang.org/nightly/rustc/platform-support.html
39. “Evaluation of Rust usage in space applications.” arXiv. https://arxiv.org/html/2405.18135v1
40. Microsoft Security Blog (2019). “Building the Azure IoT Edge Security Daemon in Rust.” https://msrc.microsoft.com/blog/2019/09/building-the-azure-iot-edge-security-daemon-in-rust/
41. STABL Energy case study. https://klizos.com/rust-iot-ultimate-ally-for-high-performance-devices/
42. Serokell. “Rust in Production: 1Password.” https://serokell.io/blog/rust-in-production-1password
43. Oxide Computer. “Hubris and Humility.” https://oxide.computer/blog/hubris-and-humility and https://github.com/oxidecomputer/hubris
44. tonari blog. “Rust Simple Hardware Project.” https://blog.tonari.no/rust-simple-hardware-project
45. Leptos framework documentation. https://leptos.dev/
46. Dioxus framework. https://dioxuslabs.com/
47. Tauri 2.0 documentation. https://tauri.app/
48. “Is Training Data Quality or Quantity More Impactful?” arXiv:2411.15821. https://arxiv.org/abs/2411.15821
49. BigCode Project. “The Stack: 3 TB of permissively licensed source code.” https://arxiv.org/pdf/2211.15533 and https://huggingface.co/datasets/bigcode/the-stack
50. “GitHub Copilot: the perfect Code compLeeter?” (2024). arXiv. https://arxiv.org/html/2406.11326v1
51. “Java Software Buildability.” ACM. https://dl.acm.org/doi/10.1145/3001878.3001882
52. GNOME Developer Blog. “GCC vs Clang for Error Messages.” https://blogs.gnome.org/mortenw/2014/01/27/gcc-vs-clang-for-error-messages/
53. “Unleashing the Power of Clippy.” arXiv. https://arxiv.org/html/2310.11738
54. Microsoft Research (2025). “RustAssistant: Using LLMs to Fix Compilation Errors in Rust Code.” ICSE 2025. https://arxiv.org/abs/2308.05177 and https://www.microsoft.com/en-us/research/publication/rustassistant-using-llms-to-fix-compilation-errors-in-rust-code/
55. Google Open Source Blog (2023). “Rust Fact vs Fiction: 5 Insights from Google’s Rust Journey 2022.” https://opensource.googleblog.com/2023/06/rust-fact-vs-fiction-5-insights-from-googles-rust-journey-2022.html
56. Rust Blog (2022). “Rust Survey 2021.” https://blog.rust-lang.org/2022/02/15/Rust-Survey-2021.html
57. Amazing CTO. “Comparing Compiler Errors in 8 Languages.” https://www.amazingcto.com/developer-productivity-compiler-errors/
58. runmat.org (2024). “Why Rust: Choosing Rust for LLM-Generated Code.” https://runmat.org/blog/why-rust