Introduction

Thank you for your interest in contributing to Cargo! This guide provides an overview of how to contribute to Cargo, how to dive into the code, and how the testing infrastructure works.

There are many ways to contribute, such as helping other users, filing issues, improving the documentation, fixing bugs, and working on small and large features.

If you have a general question about Cargo or its internals, feel free to ask on Zulip.

This guide assumes you have some familiarity with Rust, and how to use Cargo, rustup, and general development tools like git.

Please also read the Rust Code of Conduct.

Issue Tracker

Cargo's issue tracker is located at https://github.com/rust-lang/cargo/issues/. This is the primary spot where we track bugs and small feature requests. See Process for more about our process for proposing changes.

Filing issues

We can't fix what we don't know about, so please report problems liberally. This includes problems with understanding the documentation, unhelpful error messages, and unexpected behavior.

If you think that you have identified an issue with Cargo that might compromise its users' security, please do not open a public issue on GitHub. Instead, we ask you to refer to Rust's security policy.

Opening an issue is as easy as following this link. There are several templates for different issue kinds, but if none of them fit your issue, don't hesitate to modify one of the templates, or click the Open a blank issue link.

The Rust tools are spread across multiple repositories in the Rust organization. It may not always be clear where to file an issue. No worries! If you file in the wrong tracker, someone will either transfer it to the correct one or ask you to move it. Other repositories in the rust-lang organization, such as those for rustc, rustup, and rustfix, may also be relevant.

It can be tricky to know where issues with cargo fix should be filed, since the fixes are driven by rustc, processed by rustfix, and the front-end interface is implemented in Cargo. Feel free to file in the Cargo issue tracker, and it will get moved to one of the other issue trackers if necessary.

Issue labels

Issue labels are very helpful to identify the types of issues and which category they are related to. The Cargo team typically manages assigning labels. The labels use a naming convention with short prefixes and colors to indicate the kind of label:

  • Yellow, A-prefixed labels state which area of the project an issue relates to.

  • Light purple, C-prefixed labels represent the category of an issue. In particular, C-feature-request marks proposals for new features. If an issue is C-feature-request, but is not Feature accepted or I-nominated, then it has not been thoroughly discussed, and might need some additional design or perhaps should be implemented as an external subcommand first. Ping @rust-lang/cargo if you want to send a PR for such an issue.

  • Dark purple, Command-prefixed labels mean the issue has to do with a specific cargo command.

  • Green, E-prefixed labels indicate the level of experience or effort necessary to fix the issue. E-mentor issues also have some instructions on how to get started. Generally, all of the E-prefixed labels are issues that are ready for someone to contribute to!

  • Red, I-prefixed labels indicate the importance of the issue. The I-nominated label indicates that an issue has been nominated for prioritizing at the next triage meeting.

  • Purple gray, O-prefixed labels indicate the operating system or platform that the issue is specific to.

  • Orange, P-prefixed labels indicate a bug's priority.

  • S-prefixed labels are "status" labels, typically used for PRs, but can also indicate an issue is S-blocked.

  • The light orange relnotes label marks issues that should be highlighted in the Rust release notes of the next release.

  • Dark blue, Z-prefixed labels are for unstable, nightly features.

Process

This chapter gives an overview of how Cargo comes together, and how you can be a part of that process.

See the Working on Cargo chapter for an overview of the contribution process.

Cargo team

Cargo is managed by a team of volunteers. The Cargo Team reviews all changes, and sets the direction for the project.

The team meets on a weekly basis on a video chat. If you are interested in participating, feel free to contact us on Zulip.

Roadmap

The Cargo team typically establishes a roadmap each year that sets which areas they will be focusing on. This is usually posted on the Inside Rust Blog (such as the 2020 roadmap).

The Roadmap Project Board is used for tracking major initiatives. This gives an overview of the things the team is interested in and thinking about.

The RFC Project Board is used for tracking RFCs.

Working on small bugs

Issues labeled with the E-help-wanted, E-easy, or E-mentor labels are typically issues that the Cargo team wants to see addressed, and are relatively easy to get started with. If you are interested in one of those, and it has not already been assigned to someone, leave a comment. See Issue assignment below for assigning yourself.

If there is a specific issue that you are interested in, but it doesn't have one of the E- labels, leave a comment on the issue. If a Cargo team member has the time to help out, they will respond to help with the next steps.

Working on large bugs

Some issues may be difficult to fix. They may require significant code changes, or major design decisions. The E-medium and E-hard labels can be used to tag such issues. These will typically involve some discussion with the Cargo team on how to tackle it.

Working on small features

Small feature requests are typically managed on the issue tracker. Features that the Cargo team have approved will have the Feature accepted label or the E-mentor label. If there is a feature request that you are interested in, feel free to leave a comment expressing your interest. If a Cargo team member has the time to help out, they will respond to help with the next steps. Keep in mind that the Cargo team has limited time, and may not be able to help with every feature request. Most of them require some design work, which can be difficult. Check out the design principles chapter for some guidance.

Working on large features

Cargo follows the Rust model of evolution. Major features usually go through an RFC process. Therefore, before opening a feature request issue, create a Pre-RFC thread on the internals forum to get preliminary feedback. Implementing a feature as a custom subcommand is encouraged, as it helps demonstrate demand for the functionality and is a great way to deliver a working solution faster, since it can iterate outside of Cargo's release cadence.

See the unstable chapter for how new major features are typically implemented.

Bots and infrastructure

The Cargo project uses several bots. @rustbot handles issue assignment, @rust-highfive assigns reviewers to new PRs, and @bors tests and merges approved PRs; each is described in more detail below.

Issue assignment

Normally, if you plan to work on an issue that has been marked with one of the E- tags or Feature accepted, it is sufficient just to leave a comment that you are working on it. We also have a bot that allows you to formally "claim" an issue by entering the text @rustbot claim in a comment. See the Assignment docs on how this works.

Working on Cargo

This chapter gives an overview of how to build Cargo, make a change, and submit a Pull Request.

  1. Check out the Cargo source.
  2. Building Cargo.
  3. Making a change.
  4. Writing and running tests.
  5. Submitting a Pull Request.
  6. The merging process.

Checking out the source

We use the "fork and pull" model described here, where contributors push changes to their personal fork and create pull requests to bring those changes into the source repository. Cargo uses git and GitHub for all development.

  1. Fork the rust-lang/cargo repository on GitHub to your personal account (see GitHub docs).
  2. Clone your fork to your local machine using git clone (see GitHub docs).
  3. It is recommended to start a new branch for the change you want to make. All Pull Requests are made against the master branch.

Building Cargo

Cargo is built by...running cargo! There are a few prerequisites that you need to have installed:

  • rustc and cargo need to be installed. Cargo is expected to build and test with the current stable, beta, and nightly releases. It is your choice which to use. Nightly is recommended, since some nightly-specific tests are disabled when using the stable release. But using stable is fine if you aren't working on those.
  • A C compiler (typically gcc, clang, or MSVC).
  • git
  • Unix:
    • pkg-config
    • OpenSSL (libssl-dev on Ubuntu, openssl-devel on Fedora)
  • macOS:
    • OpenSSL (homebrew is recommended to install the openssl package)

If you can successfully run cargo build, you should be good to go!

Running Cargo

You can use cargo run to run cargo itself, or you can use the path directly to the cargo binary, such as target/debug/cargo.

If you are using rustup, beware that running the binary directly can cause issues with rustup overrides. Usually, when cargo is executed as part of rustup, the toolchain becomes sticky (via an environment variable), and all calls to rustc will use the same toolchain. But when cargo is not run via rustup, the toolchain may change based on the directory. Since Cargo changes the directory for each compilation, this can cause different calls to rustc to use different versions. There are a few workarounds:

  • Don't use rustup overrides.
  • Use rustup run target/debug/cargo to execute cargo.
  • Set the RUSTC environment variable to a specific rustc executable (not the rustup wrapper).
  • Create a custom toolchain. This is a bit of a hack, but you can create a directory in the rustup toolchains directory, and create symlinks for all the files and directories in there to your toolchain of choice (such as nightly), except for the cargo binary, which you can symlink to your target/debug/cargo binary in your project directory.

Normally, all development is done by running Cargo's test suite, so running it directly usually isn't required. But it can be useful for testing Cargo on more complex projects.

Making a change

Some guidelines on working on a change:

  • All code changes are expected to comply with the formatting suggested by rustfmt. You can use rustup component add rustfmt to install rustfmt and use cargo fmt to automatically format your code.
  • Commit as you go.
  • Include tests that cover all non-trivial code. See the Testing chapter for more about writing and running tests.
  • All code should be warning-free. This is checked during tests.

Submitting a Pull Request

After you have committed your work and pushed it to GitHub, you can open a Pull Request.

  • Push your commits to GitHub and create a pull request against Cargo's master branch.
  • Include a clear description of what the change is and why it is being made.
  • Use GitHub's keywords in the description to automatically link to an issue if the PR resolves the issue. For example Closes #1234 will link issue #1234 to the PR. When the PR is merged, GitHub will automatically close the issue.

The rust-highfive bot will automatically assign a reviewer for the PR. It may take at least a few days for someone to respond. If you don't get a response in over a week, feel free to ping the assigned reviewer.

When your PR is submitted, GitHub automatically runs all tests. The GitHub interface will show a green checkmark if it passes, or a red X if it fails. There are links to the logs on the PR page to diagnose any issues. The tests typically finish in under 30 minutes.

The reviewer might point out changes deemed necessary. Large or tricky changes may require several passes of review and changes.

The merging process

After a reviewer has approved your PR, they will issue a command to the bors bot (also known as "Homu", the software that powers @bors). Bors will create a temporary branch with your PR, and run all tests. Only if all tests pass will it merge the PR to master. If it fails, the bot will leave a comment on the PR. This system ensures that the master branch is always in a good state, and that merges are processed one at a time. The Homu queue dashboard shows the current merge queue. Cargo's queue is rarely busy, but a busy project like the rust repo is constantly full.

Assuming everything works, congratulations! It may take at least a week for the changes to arrive on the nightly channel. See the release chapter for more information on how Cargo releases are made.

Release process

Cargo is released with rustc using a "train model". After a change lands in Cargo's master branch, it will be synced with the rust-lang/rust repository by a Cargo team member, which happens about once a week. If there are complications, it can take longer. After it is synced and merged, the changes will appear in the next nightly release, which is usually published around 00:30 UTC.

After changes are in the nightly release, they will make their way to the stable release anywhere from 6 to 12 weeks later, depending on when during the cycle it landed.

The current release schedule is posted on the Rust Forge. See the release process for more details on how Rust's releases are created. Rust releases are managed by the Release team.

Build process

The build process for Cargo is handled as part of building Rust. Every PR on the rust-lang/rust repository creates a full collection of release artifacts for every platform. The code for this is in the dist bootstrap module. Every night at 00:00 UTC, the artifacts from the most recently merged PR are promoted to the nightly release channel. A similar process happens for beta and stable releases.

Version updates

Shortly after each major release, a Cargo team member will post a PR to update Cargo's version in Cargo.toml. Cargo's library is permanently unstable, so its version number starts with a 0. The minor version is always 1 greater than the Rust release it is a part of, so cargo 0.49.0 is part of the 1.48 Rust release. The CHANGELOG is also usually updated at this time.

Also, any version-specific checks that are no longer needed can be removed. For example, some tests are disabled on stable if they require some nightly behavior. Once that behavior is available on the new stable release, the checks are no longer necessary. (I usually search for the word "nightly" in the testsuite directory, and read the comments to see if any of those nightly checks can be removed.)

Sometimes Cargo will have a runtime check to probe rustc if it supports a specific feature. This is usually stored in the TargetInfo struct. If this behavior is now stable, those checks should be removed.

Cargo has several other packages in the crates/ directory. If any of these packages have changed, the version should be bumped before the beta release. It is rare that these get updated. Bumping these as-needed helps avoid churning incompatible version numbers. This process should be improved in the future!

Docs publishing

Docs are automatically published during the Rust release process. The nightly channel's docs appear at https://doc.rust-lang.org/nightly/cargo/. Once nightly is promoted to beta, those docs will appear at https://doc.rust-lang.org/beta/cargo/. Once the stable release is made, it will appear on https://doc.rust-lang.org/cargo/ (which is the "current" stable) and the release-specific URL such as https://doc.rust-lang.org/1.46.0/cargo/.

The code that builds the documentation is located in the doc bootstrap module.

crates.io publishing

Cargo's library is published to crates.io as part of the stable release process. This is handled by the Release team as part of their process. There is a publish.py script that in theory should help with this process. The test and build tool crates aren't published.

Beta backports

If there is a regression or major problem detected during the beta phase, it may be necessary to backport a fix to beta. The process is documented in the Beta Backporting page.

Stable backports

In (hopefully!) very rare cases, a major regression or problem may be reported after the stable release. Decisions about this are usually coordinated between the Release team and the Cargo team. There is usually a high bar for making a stable patch release, and the decision may be influenced by whether or not there are other changes that need a new stable release.

The process here is similar to the beta-backporting process. The rust-lang/cargo branch is the same as beta (rust-1.XX.0). The rust-lang/rust branch is called stable.

Unstable features

Most new features should go through the unstable process. This means that the feature will only be usable on the nightly channel, and requires a specific opt-in by the user. Small changes can skip this process, but please consult with the Cargo team first.

Unstable feature opt-in

Features that require behavior changes or new syntax in Cargo.toml need a cargo-features value placed at the top of Cargo.toml to enable them. The process for adding a new feature is described in the features module. Code that implements the feature will need to manually check that the feature is enabled for the current manifest.
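For illustration, here is a rough sketch of what such a manifest-level check might look like, assuming a hypothetical my-new-field feature; the real helpers (Features, Feature, and the features! macro) live in the features module, and the exact names should be taken from there:

// Hypothetical sketch of guarding a new manifest field behind an unstable
// feature. `my_new_field` is a placeholder generated by the `features!` macro.
use crate::core::{Feature, Features};
use crate::util::CargoResult;

fn validate_my_new_field(features: &Features, value: Option<&str>) -> CargoResult<()> {
    if value.is_some() {
        // Errors unless the manifest opted in with `cargo-features = ["my-new-field"]`
        // (or the feature has since been stabilized).
        features.require(Feature::my_new_field())?;
    }
    Ok(())
}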

Features that add new command-line flags, config options, or environment variables need a -Z flag to enable them. The features module also describes how to add these. New flags should use the fail_if_stable_opt method to check if the -Z unstable-options flag has been passed.
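As a rough sketch, a new flag might be gated like this; the flag name and tracking issue number are placeholders, and the exact call site should follow existing examples in the codebase:

// Hypothetical sketch of gating a new `--my-flag` option on `-Z unstable-options`.
use crate::util::{CargoResult, Config};

fn check_my_flag(config: &Config, my_flag_passed: bool) -> CargoResult<()> {
    if my_flag_passed {
        // Errors on stable and beta unless `-Z unstable-options` was also passed.
        // The second argument is the tracking issue number (placeholder here).
        config.cli_unstable().fail_if_stable_opt("--my-flag", 1234)?;
    }
    Ok(())
}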

Unstable documentation

Every unstable feature should have a section added to the unstable chapter describing how to use the feature.

-Z CLI flags should be documented in the built-in help in the cli module.

Tracking issues

Each unstable feature should get a tracking issue. These issues are typically created when a PR is close to being merged, or soon after it is merged. Use the tracking issue template when creating a tracking issue.

Larger features should also get a new label in the issue tracker so that when issues are filed, they can be easily tied together.

Stabilization

After some period of time, typically measured in months, the feature can be considered to be stabilized. The feature should not have any significant known bugs or issues, and any design concerns should be resolved.

The stabilization process depends on the kind of feature. For smaller features, you can leave a comment on the tracking issue expressing interest in stabilizing it. It usually helps to indicate that the feature has received some real-world testing and that there is demand for broad use.

For larger features that have not gone through the RFC process, then an RFC to call for stabilization might be warranted. This gives the community a final chance to provide feedback about the proposed design.

For a small feature, or one that has already gone through the RFC process, a Cargo Team member may decide to call for a "final comment period" using rfcbot. This is a public signal that a major change is being made, and gives the Cargo Team members an opportunity to confirm or block the change. This process can take a few days or weeks, or longer if a concern is raised.

Once the stabilization has been approved, the person who called for stabilization should prepare a PR to stabilize the feature. This PR should:

  • Flip the feature to stable in the features module.
  • Remove any unstable checks that aren't automatically handled by the feature system.
  • Move the documentation from the unstable chapter into the appropriate places in the Cargo book and man pages.
  • Remove the -Z flags and help message if applicable.
  • Update all tests to remove nightly checks.
  • Tag the PR with relnotes label if it seems important enough to highlight in the Rust release notes.

Architecture Overview

This chapter gives a very high-level overview of Cargo's architecture. This is intended to give you links into the code which is hopefully commented with more in-depth information.

If you feel something is missing that would help you, feel free to ask on Zulip.

Codebase Overview

This is a very high-level overview of the Cargo codebase.

  • src/bin/cargo — Cargo is split into a library and a binary. This is the binary side that handles argument parsing, and then calls into the library to perform the appropriate subcommand. Each Cargo subcommand is a separate module here. See SubCommands.

  • src/cargo/ops — Every major operation is implemented here. This is what the CLI binary usually calls into to perform the appropriate action.

    • src/cargo/ops/cargo_compile.rs — This is the entry point for all the compilation commands. This is a good place to start if you want to follow how compilation starts and flows to completion.
  • src/cargo/core/resolver — This contains the dependency and feature resolvers.

  • src/cargo/core/compiler — This is the code responsible for running rustc and rustdoc.

    • src/cargo/core/compiler/build_context/mod.rs — The BuildContext is the result of the "front end" of the build process. This contains the graph of work to perform and any settings necessary for rustc. After this is built, the next stage of building is handled in Context.

    • src/cargo/core/compiler/context — The Context is the mutable state used during the build process. This is the core of the build process, and everything is coordinated through this.

    • src/cargo/core/compiler/fingerprint.rs — The fingerprint module contains all the code that handles detecting if a crate needs to be recompiled.

  • src/cargo/core/source — The Source trait is an abstraction over different sources of packages. Sources are uniquely identified by a SourceId. Sources are implemented in the src/cargo/sources directory.

  • src/cargo/util — This directory contains generally-useful utility modules.

  • src/cargo/util/config — This directory contains the config parser. It makes heavy use of serde to merge and translate config values. The Config is usually accessed from the Workspace, though references to it are scattered around for more convenient access.

  • src/cargo/util/toml — This directory contains the code for parsing Cargo.toml files.

  • src/doc — This directory contains Cargo's documentation and man pages.

  • src/etc — These are files that get distributed in the etc directory in the Rust release. The man pages are auto-generated by a script in the src/doc directory.

  • crates — A collection of independent crates used by Cargo.

SubCommands

Cargo is a single binary composed of a set of clap subcommands. All subcommands live in src/bin/cargo/commands directory. src/bin/cargo/main.rs is the entry point.

Each subcommand, such as src/bin/cargo/commands/build.rs, usually performs the following:

  1. Parse the CLI flags. See the command_prelude module for some helpers to make this easier.
  2. Load the config files.
  3. Discover and load the workspace.
  4. Call the actual implementation of the subcommand, which resides in src/cargo/ops.
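As an illustration, a hypothetical subcommand module might look roughly like the following; the helpers come from the command_prelude module, and ops::my_operation is a placeholder for the real implementation in src/cargo/ops:

// Rough sketch of a file under src/bin/cargo/commands/, following the steps above.
use crate::command_prelude::*;
use cargo::ops;

pub fn cli() -> App {
    // 1. Declare the CLI flags for this subcommand.
    subcommand("my-subcommand")
        .about("An example subcommand")
        .arg_manifest_path()
}

pub fn exec(config: &mut Config, args: &ArgMatches<'_>) -> CliResult {
    // 2. The config files are loaded by main and passed in via `config`.
    // 3. Discover and load the workspace.
    let ws = args.workspace(config)?;
    // 4. Call into the actual implementation in src/cargo/ops (placeholder name).
    ops::my_operation(&ws)?;
    Ok(())
}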

If the subcommand is not found in the built-in list, Cargo will automatically search for an executable named cargo-{NAME} in the user's PATH and run it as the subcommand.

Console Output

All of Cargo's output should go through the Shell struct. You can normally obtain the Shell instance from the Config struct. Do not use the std println! macros.
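A minimal sketch of what that looks like, assuming code inside the cargo crate (Shell has several other helpers; consult its documentation for the full set):

// Write status and warning messages through the Shell instead of println!.
use crate::util::{CargoResult, Config};

fn report(config: &Config, pkg: &str) -> CargoResult<()> {
    // Prints a colored, right-aligned verb followed by the message, on stderr.
    config.shell().status("Fetching", pkg)?;
    config.shell().warn("something looks suspicious")?;
    Ok(())
}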

Most of Cargo's output goes to stderr. When running in JSON mode, the output goes to stdout.

It is important to properly handle errors when writing to the console. Informational commands, like cargo list, should ignore any errors writing the output. There are some drop_print macros that are intended to make this easier.

Messages written during compilation should handle errors, and abort the build if they are unable to be displayed. This is generally automatically handled in the JobQueue as it processes each message.

Errors

Cargo uses anyhow for managing errors. This makes it convenient to "chain" errors together, so that Cargo can report how an error originated, and what it was trying to do at the time.

Error helpers are implemented in the errors module. Use the InternalError error type for errors that are not expected to happen. This will print a message to the user to file a bug report.

The binary side of Cargo uses the CliError struct to wrap the process exit code. Usually Cargo exits with 101 for an error, but some commands like cargo test will exit with different codes.
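For example, here is a minimal sketch of the context-chaining style with anyhow (this also mirrors the guideline below about adding context to low-level std::fs calls):

// Attach context describing what Cargo was doing, so the error report reads
// as a chain: "failed to read `Cargo.toml`" followed by the underlying cause.
use std::fs;
use std::path::Path;

use anyhow::{Context, Result};

fn read_manifest(path: &Path) -> Result<String> {
    fs::read_to_string(path)
        .with_context(|| format!("failed to read `{}`", path.display()))
}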

Style

Some guidelines for Cargo's output:

  • Keep the normal output brief. Cargo is already fairly noisy, so try to keep the output as brief and clean as possible.
  • Good error messages are very important! Try to keep them brief and to the point, but good enough that a beginner can understand what is wrong and can figure out how to fix it. It is a difficult balance to hit! Err on the side of providing extra information.
  • When using any low-level routines, such as std::fs, always add error context about what it is doing. For example, reading from a file should include context about which file is being read if there is an error.
  • Cargo's error style is usually a phrase, starting with a lowercase letter. If there is a longer error message that needs multiple sentences, go ahead and use multiple sentences. This should probably be improved sometime in the future to be more structured.

Debug logging

Cargo uses the env_logger crate to display debug log messages. The CARGO_LOG environment variable can be set to enable debug logging, with a value such as trace, debug, or warn. It also supports filtering for specific modules. Feel free to use the standard log macros to help with diagnosing problems.

# Outputs all logs with levels debug and higher
CARGO_LOG=debug cargo generate-lockfile

# Don't forget that you can filter by module as well
CARGO_LOG=cargo::core::resolver=trace cargo generate-lockfile

# This will print lots of info about the download process. `trace` prints even more.
CARGO_HTTP_DEBUG=true CARGO_LOG=cargo::ops::registry=debug cargo fetch

# This is an important command for diagnosing fingerprint issues.
CARGO_LOG=cargo::core::compiler::fingerprint=trace cargo build
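On the code side, this is just the standard log macros; a minimal sketch (the function and messages are illustrative):

// Messages appear when CARGO_LOG enables the corresponding module and level.
use log::{debug, trace};

fn consider_candidate(pkg: &str) {
    debug!("considering candidate package {}", pkg);
    trace!("full candidate details for {} would go here", pkg);
}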

Packages and Resolution

Workspaces

The Workspace object is usually created very early by calling the workspace helper method. This discovers the root of the workspace, and loads all the workspace members as Package objects. Each package corresponds to a single Cargo.toml (which is deserialized into a Manifest), and may define several Targets, such as the library, binaries, integration tests, or examples. Targets are crates (each target defines a crate root, like src/lib.rs or examples/foo.rs) and are what is actually compiled by rustc.
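As a rough sketch of how these types relate (assuming code inside the cargo crate; method names may drift over time):

// Walk a Workspace: each member is a Package (one Cargo.toml), and each
// Package defines one or more Targets (the crates rustc actually compiles).
use crate::core::Workspace;

fn list_targets(ws: &Workspace<'_>) -> Vec<String> {
    let mut out = Vec::new();
    for pkg in ws.members() {
        for target in pkg.targets() {
            // Each target has a crate root such as src/lib.rs or examples/foo.rs.
            out.push(format!("{} -> {}", pkg.name(), target.name()));
        }
    }
    out
}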

Packages and Sources

There are several data structures that are important to understand how packages are found and loaded:

  • Package — A package, which is a Cargo.toml manifest and its associated source files.
    • PackageId — A unique identifier for a package.
  • Source — An abstraction for something that can fetch packages (a remote registry, a git repo, the local filesystem, etc.). Check out the source implementations for all the details about registries, indexes, git dependencies, etc.
    • SourceId — A unique identifier for a source.
  • SourceMap — Map of all available sources.
  • PackageRegistry — This is the main interface for how the dependency resolver finds packages. It contains the SourceMap, and handles things like the [patch] table. The Registry trait provides a generic interface to the PackageRegistry, but this is only used for providing an alternate implementation of the PackageRegistry for testing. The dependency resolver sends a query to the PackageRegistry to "get me all packages that match this dependency declaration".
  • Summary — A summary is a subset of a Manifest, and is essentially the information that can be found in a registry index. Queries against the PackageRegistry yield a Summary. The resolver uses the summary information to build the dependency graph.
  • PackageSet — Contains all of the Package objects. This works with the Downloads struct to coordinate downloading packages. It has a reference to the SourceMap to get the Source objects which tell the Downloads struct which URLs to fetch.

All of these come together in the ops::resolve module. This module contains the primary functions for performing resolution (described below). It also handles downloading of packages. It is essentially where all of the data structures above come together.

Resolver

Resolve is the representation of a directed graph of package dependencies, which uses PackageIds for nodes. This is the data structure that is saved to the Cargo.lock file. If there is no lock file, Cargo constructs a resolve by finding a graph of packages which matches the declared dependency specifications according to SemVer.

ops::resolve is the front-end for creating a Resolve. It handles loading the Cargo.lock file, checking if it needs updating, etc.

Resolution is currently performed twice. It is performed once with all features enabled. This is the resolve that gets saved to Cargo.lock. It then runs again with only the specific features the user selected on the command-line. Ideally this second run will get removed in the future when transitioning to the new feature resolver.

Feature resolver

A new feature-specific resolver was added in 2020 which adds more sophisticated feature resolution. It is located in the resolver::features module. The original dependency resolver still performs feature unification, as it can help reduce the dependencies it has to consider during resolution (rather than assuming every optional dependency of every package is enabled). Checking if a feature is enabled must go through the new feature resolver.

Compilation

The Unit is the primary data structure representing a single execution of the compiler. It (mostly) contains all the information needed to determine which flags to pass to the compiler.

The entry to the compilation process is located in the cargo_compile module. The compilation can be conceptually broken into these steps:

  1. Perform dependency resolution (see the resolution chapter).
  2. Generate the root Units, the things the user requested to compile on the command-line. This is done in generate_targets.
  3. Starting from the root Units, generate the UnitGraph by walking the dependency graph from the resolver. The UnitGraph contains all of the Unit structs, and information about the dependency relationships between units. This is done in the unit_dependencies module.
  4. Construct the BuildContext with all of the information collected so far. This is the end of the "front end" of compilation.
  5. Create a Context, a large, mutable data structure that coordinates the compilation process.
  6. The Context will create a JobQueue, a data structure that tracks which units need to be built.
  7. drain_the_queue does the compilation process. This is the only point in Cargo that currently uses threads.
  8. The result of the compilation is stored in the Compilation struct. This can be used for various things, such as running tests after the compilation has finished.

Files

This chapter gives some pointers on where to start looking at Cargo's on-disk data file structures.

  • Layout is the abstraction for the target directory. It handles locking the target directory, and providing paths to the parts inside. There is a separate Layout for each "target".
  • Resolve contains the contents of the Cargo.lock file. See the encode module for the different Cargo.lock formats.
  • TomlManifest contains the contents of the Cargo.toml file. It is translated to a Manifest object for some simplification, and the Manifest is stored in a Package.
  • The fingerprint module deals with the fingerprint information stored in target/debug/.fingerprint. This tracks whether or not a crate needs to be rebuilt.
  • cargo install tracks its installed files with some metadata in $CARGO_HOME. The metadata is managed in the common_for_install_and_uninstall module.
  • Git sources are cached in $CARGO_HOME/git. The code for this cache is in the git source module.
  • Registries are cached in $CARGO_HOME/registry. There are three parts, the index, the compressed .crate files, and the extracted sources of those crate files.
    • Management of the registry cache can be found in the registry source module. Note that this includes an on-disk cache as an optimization for accessing the git repository.
    • Saving of .crate files is handled by the RemoteRegistry.
    • Extraction of .crate files is handled by the RegistrySource.
    • There is a lock for the package cache. Code must be careful, because this lock must be obtained manually. See Config::acquire_package_cache_lock.
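As a sketch of the last point, code that touches the package cache is expected to hold the lock for the duration of the operation (assuming code inside the cargo crate):

// The lock is not acquired automatically; hold the guard while working with
// the caches under $CARGO_HOME, and it is released when the guard is dropped.
use crate::util::{CargoResult, Config};

fn work_with_package_cache(config: &Config) -> CargoResult<()> {
    let _lock = config.acquire_package_cache_lock()?;
    // ... download, extract, or scan packages here ...
    Ok(())
}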

Filesystems

Cargo tends to get run on a very wide array of file systems. Different file systems can have a wide range of capabilities, and Cargo should strive to do its best to handle them. Some examples of issues to deal with:

  • Not all file systems support locking. Cargo tries to detect if locking is supported, and if not, will ignore lock errors. This isn't ideal, but it is difficult to deal with.
  • The fs::canonicalize function doesn't work on all file systems (particularly some Windows file systems). If that function is used, there should be a fallback if it fails (a sketch of such a fallback follows this list). This function will also return \\?\ style paths on Windows, which can have some issues (such as some tools not supporting them, or having issues with relative paths).
  • Timestamps can be unreliable. The fingerprint module has a deeper discussion of this. One example is that Docker cache layers will erase the fractional part of the time stamp.
  • Symlinks are not always supported, particularly on Windows.
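As mentioned above for fs::canonicalize, a minimal sketch of such a fallback might look like this (the helper is illustrative, not an existing Cargo API):

// Fall back to the original path on file systems where canonicalization fails.
use std::fs;
use std::path::{Path, PathBuf};

fn try_canonicalize(path: &Path) -> PathBuf {
    fs::canonicalize(path).unwrap_or_else(|_| path.to_path_buf())
}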

Tests

Cargo has an extensive test suite. Most of it is implemented as integration tests in the testsuite directory. There are several other tests:

  • Unit tests are scattered throughout.
  • The dependency resolver has its own set of tests in the resolver-tests directory.
  • All of the packages in the crates directory have their own set of tests.
  • The build-std test is for the build-std feature. It is separate since it has some special requirements.
  • Documentation has a variety of tests, such as link validation, and the SemVer chapter validity checks.

Running Tests

Using cargo test is usually sufficient for running the full test suite. This can take a few minutes, so you may want to use more targeted flags to pick the specific test you want to run, such as cargo test --test testsuite -- check::check_success.

Running nightly tests

Some tests only run on the nightly toolchain, and will be ignored on other channels. It is recommended that you run tests with both nightly and stable to ensure everything is working as expected.

Some of the nightly tests require the rustc-dev and llvm-tools-preview rustup components installed. These components include the compiler as a library. This may already be installed with your nightly toolchain, but if it isn't, run rustup component add rustc-dev llvm-tools-preview --toolchain=nightly.

Running cross tests

Some tests exercise cross compiling to a different target. This will require you to install the appropriate target. This typically is the 32-bit target of your host platform. For example, if your host is a 64-bit x86_64-unknown-linux-gnu, then you should install the 32-bit target with rustup target add i686-unknown-linux-gnu. If you don't have the alternate target installed, there should be an error message telling you what to do. You may also need to install additional tools for the target. For example, on Ubuntu you should install the gcc-multilib package.

If you can't install an alternate target, you can set the CFG_DISABLE_CROSS_TESTS=1 environment variable to disable these tests. The Windows cross tests only support the MSVC toolchain.

Running build-std tests

The build-std tests are disabled by default, but you can run them by setting the CARGO_RUN_BUILD_STD_TESTS=1 environment variable and running cargo test --test build-std. This requires the nightly channel, and also requires the rust-src component installed with rustup component add rust-src --toolchain=nightly.

Writing Tests

The following focuses on writing an integration test. However, writing unit tests is also encouraged!

Testsuite

Cargo has a wide variety of integration tests that execute the cargo binary and verify its behavior, located in the testsuite directory. The support crate contains many helpers to make this process easy.

These tests typically work by creating a temporary "project" with a Cargo.toml file, executing the cargo binary process, and checking the stdout and stderr output against the expected output.

cargo_test attribute

Cargo's tests use the #[cargo_test] attribute instead of #[test]. This attribute injects some code which does some setup before starting the test, creating the little "sandbox" described below.

Basic test structure

The general form of a test involves creating a "project", running cargo, and checking the result. Projects are created with the ProjectBuilder where you specify some files to create. The general form looks like this:

let p = project()
    .file("src/main.rs", r#"fn main() { println!("hi!"); }"#)
    .build();

The project creates a mini sandbox under the "cargo integration test" directory with each test getting a separate directory such as /path/to/cargo/target/cit/t123/. Each project appears as a separate directory. There is also an empty home directory created that will be used as a home directory instead of your normal home directory.

If you do not specify a Cargo.toml manifest using file(), one is automatically created with a project name of foo using basic_manifest().

To run Cargo, call the cargo method and make assertions on the execution:

p.cargo("run --bin foo")
    .with_stderr(
        "\
[COMPILING] foo [..]
[FINISHED] [..]
[RUNNING] `target/debug/foo`
",
    )
    .with_stdout("hi!")
    .run();

This uses the Execs struct to build up a command to execute, along with the expected output.

See support::lines_match for an explanation of the string pattern matching. Patterns are used to make it easier to match against the expected output.

Browse the pub functions in the support crate for a variety of other helpful utilities.

Testing Nightly Features

If you are testing a Cargo feature that only works on "nightly" Cargo, then you need to call masquerade_as_nightly_cargo on the process builder like this:

p.cargo("build").masquerade_as_nightly_cargo()

If you are testing a feature that only works on nightly rustc (such as benchmarks), then you should exit the test if it is not running with nightly rust, like this:

if !is_nightly() {
    // Add a comment here explaining why this is necessary.
    return;
}

Platform-specific Notes

When checking output, use / for paths even on Windows: the actual output of \ on Windows will be replaced with /.

Be careful when executing binaries on Windows. You should not rename, delete, or overwrite a binary immediately after running it. Under some conditions Windows will fail with errors like "directory not empty" or "failed to remove" or "access is denied".

Specifying Dependencies

You should not write any tests that use the network such as contacting crates.io. Typically, simple path dependencies are the easiest way to add a dependency. Example:

let p = project()
    .file("Cargo.toml", r#"
        [package]
        name = "foo"
        version = "1.0.0"

        [dependencies]
        bar = {path = "bar"}
    "#)
    .file("src/lib.rs", "extern crate bar;")
    .file("bar/Cargo.toml", &basic_manifest("bar", "1.0.0"))
    .file("bar/src/lib.rs", "")
    .build();

If you need to test with registry dependencies, see support::registry::Package for creating packages you can depend on.
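A minimal sketch of a test that depends on a mock registry package (the import paths assume the support crate layout described above and may differ slightly):

use cargo_test_support::project;
use cargo_test_support::registry::Package;

#[cargo_test]
fn depends_on_registry_package() {
    // Publishes `bar` to the local mock registry; no network access is involved.
    Package::new("bar", "1.0.0").publish();

    let p = project()
        .file(
            "Cargo.toml",
            r#"
                [package]
                name = "foo"
                version = "0.1.0"

                [dependencies]
                bar = "1.0.0"
            "#,
        )
        .file("src/lib.rs", "")
        .build();

    p.cargo("build").run();
}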

If you need to test git dependencies, see support::git to create a git dependency.

Profiling

Internal profiler

Cargo has a basic, hierarchical profiler built-in. The environment variable CARGO_PROFILE can be set to an integer which specifies how deep in the profile stack to print results for.

# Output first three levels of profiling info
CARGO_PROFILE=3 cargo generate-lockfile

Informal profiling

The overhead for starting a build should be kept as low as possible (preferably, well under 0.5 seconds on most projects and systems). Currently, the primary parts that affect this are:

  • Running the resolver.
  • Querying the index.
  • Checking git dependencies.
  • Scanning the local project.
  • Building the unit dependency graph.

We currently don't have any automated systems or tools for measuring or tracking the startup time. We informally measure these on changes that are likely to affect the performance. Usually this is done by measuring the time for cargo build to finish in a large project where the build is fresh (no actual compilation is performed). Hyperfine is a command-line tool that can be used to roughly measure the difference between different commands and settings.

Design Principles

The purpose of Cargo is to formalize a canonical Rust workflow, by automating the standard tasks associated with distributing software. Cargo simplifies structuring a new project, adding dependencies, writing and running unit tests, and more.

Cargo is not intended to be a general-purpose build tool. Ideally, it should be easy to integrate it within another build tool, though admittedly that is not as seamless as desired.

Stability and compatibility

Backwards compatibility

Cargo strives to remain backwards compatible with projects created in previous versions. The CLI interface also strives to remain backwards compatible, such that the commands and options behave the same. That being said, changes in behavior, and even outright breakage, are sometimes made in limited situations. The following outlines some situations where backwards-incompatible changes are made:

  • Anything that addresses a security concern.
  • Dropping support for older platforms and tooling. Cargo follows the Rust tiered platform support.
  • Changes to resolve possibly unsafe or unreliable behavior.

None of these changes should be taken lightly; they should be avoided if possible, or made with a transition period to alert the user of the potential change.

Behavior is sometimes changed in ways where there is high confidence that it won't break existing workflows. Almost every change carries this risk, so it is often a judgment call balancing the benefit of the change with the perceived possibility of its negative consequences.

At times, some changes fall in the gray area, where the current behavior is undocumented, or not working as intended. These are more difficult judgment calls. The general preference is to balance towards avoiding breaking existing workflows.

Support for older registry APIs and index formats may be dropped, if there is high confidence that there aren't any active registries that may be affected. This has never (to my knowledge) happened so far, and is unlikely to happen in the future, but remains a possibility.

In all of the above, a transition period may be employed if a change is known to cause breakage. A warning can be issued to alert the user that something will change, and provide them with an alternative to resolve the issue (preferably in a way that is compatible across versions if possible).

Cargo is only expected to work with the version of the related Rust tools (rustc, rustdoc, etc.) that it is released with. As a matter of choice, the latest nightly works with the most recent stable release, but that is mostly to accommodate development of Cargo itself, and should not be expected by users.

Forwards compatibility

Additionally, Cargo strives for a limited degree of forwards compatibility. Changes should not egregiously prevent older versions from working. This is mostly relevant for persistent data, such as on-disk files and the registry interface and index. It also applies to a lesser degree to the registry API.

Changes to Cargo.lock require a transition time, where the new format is not automatically written when the lock file is updated. The transition time should not be less than 6 months, though preferably longer. New projects may use the new format in a shorter time frame.

Changes to Cargo.toml can be made in any release. This is because the user must manually modify the file, and opt-in to any new changes. Additionally, Cargo will usually only issue a warning about new fields it doesn't understand, but otherwise continue to function.

Changes to cache files (such as artifacts in the target directory, or cached data in Cargo's home directory) should not prevent older versions from running, but they may cause older versions to recreate the cache, which may result in a performance impact.

Changes to the registry index should not prevent older versions from working. Generally, older versions ignore new fields, so the format should be easily extensible. Changes to the format or interpretation of existing fields should be done very carefully to avoid preventing older versions of Cargo from working. In some cases, this may mean that older versions of Cargo will not be able to select a newly published crate, but it shouldn't prevent them from working at all. This level of compatibility may not last forever, but the exact time frame for such a change has not yet been decided.

The registry API may be changed in such a way to prevent older versions of Cargo from working. Generally, compatibility should be retained for as long as possible, but the exact length of time is not specified.

Simplicity and layers

Standard workflows should be easy and consistent. Each knob that is added has a high cost, regardless of whether it is intended for a small audience. Layering and defaults can help avoid the surface area that the user needs to be concerned with. Try to avoid small functionalities that may have complex interactions with one another.