Developing Plugins
Packer is extensible and supports plugins that let you create and use custom builders, provisioners, post-processors, and data sources. This page explains how to develop Packer plugins. Before you begin, we recommend reviewing the Packer documentation and the instructions for installing external plugins.
Warning: This is an advanced topic. You should have a strong working knowledge of Packer before you start writing plugins.
Language Requirements
You must write Packer plugins in Go.
Plugin System Architecture
A Packer plugin is just a Go binary. Instead of loading plugins directly into a running application, Packer runs each plugin as a separate process. These plugin processes communicate with the Packer core over an RPC interface defined in the packer-plugin-sdk. The Packer core itself is responsible for launching and cleaning up the plugin processes.
Plugin Development Basics
The components that can be created and used in a Packer plugin are builders, provisioners, post-processors, and data sources.
Each of these components has a corresponding interface.
All you need to do to create a plugin is:
- create an implementation of the desired interface, and
- serve it using the server provided in the packer-plugin-sdk.
The core and the SDK handle all of the communication details inside the server.
Your plugin must use two packages from the SDK to implement the server and interfaces. You're encouraged to use whatever other packages you want in your plugin implementation. Because plugins are their own processes, there is no danger of colliding dependencies.
- github.com/hashicorp/packer-plugin-sdk/packer - Contains all the interfaces that you have to implement for any given plugin.
- github.com/hashicorp/packer-plugin-sdk/plugin - Contains the code to serve the plugin. This handles all the inter-process communication.
Basic examples of serving your component are shown below.
// main.go
package main

import (
	"fmt"
	"os"

	"github.com/hashicorp/packer-plugin-sdk/plugin"
)

// Assume this implements the packer.Builder interface
type ExampleBuilder struct{}

// Assume this also implements the packer.Builder interface
type AnotherBuilder struct{}

// Assume this implements the packer.PostProcessor interface
type FooPostProcessor struct{}

// Assume this implements the packer.Provisioner interface
type BarProvisioner struct{}

func main() {
	pps := plugin.NewSet()
	pps.RegisterBuilder("example", new(ExampleBuilder))
	pps.RegisterBuilder(plugin.DEFAULT_NAME, new(AnotherBuilder))
	pps.RegisterPostProcessor("foo", new(FooPostProcessor))
	pps.RegisterProvisioner("bar", new(BarProvisioner))
	err := pps.Run()
	if err != nil {
		fmt.Fprintln(os.Stderr, err.Error())
		os.Exit(1)
	}
}
This plugin.NewSet invocation handles all the details of communicating with the Packer core and serving your components over RPC. As long as each struct you register implements one of the component interfaces, Packer will be able to launch your plugin and use it.
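To make the "Assume this implements the packer.Builder interface" comment above concrete, here is a minimal, illustrative skeleton of such an implementation. It is a sketch rather than a working builder: the method set matches the Builder interface exposed by recent versions of the packer-plugin-sdk (ConfigSpec, Prepare, and Run), but the method bodies are placeholders, and real plugins typically generate their ConfigSpec from a configuration struct with the SDK's HCL2 tooling instead of returning an empty spec.

// builder.go: illustrative skeleton only; stands in for the empty ExampleBuilder above
package main

import (
	"context"

	"github.com/hashicorp/hcl/v2/hcldec"
	"github.com/hashicorp/packer-plugin-sdk/packer"
)

type ExampleBuilder struct{}

// ConfigSpec returns the HCL2 object spec used to decode this builder's
// configuration block. An empty spec is used here purely as a placeholder.
func (b *ExampleBuilder) ConfigSpec() hcldec.ObjectSpec {
	return hcldec.ObjectSpec{}
}

// Prepare decodes and validates the configuration, returning any generated
// variable names, warnings, and an error if validation fails.
func (b *ExampleBuilder) Prepare(raws ...interface{}) ([]string, []string, error) {
	return nil, nil, nil
}

// Run performs the build and returns the resulting artifact, if any.
func (b *ExampleBuilder) Run(ctx context.Context, ui packer.Ui, hook packer.Hook) (packer.Artifact, error) {
	ui.Say("ExampleBuilder: nothing to do in this sketch")
	return nil, nil
}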
If you register a component with its own name, the component name will be appended to the plugin name to create a unique name. If you register a component using the special string constant plugin.DEFAULT_NAME, then the component will be referenced by using only the plugin name. For example, if your plugin is named packer-plugin-my, the above set definition would make the following components available:
- the my-example builder
- the my builder
- the my-foo post-processor
- the my-bar provisioner
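The example above registers no data source, but data sources follow the same naming pattern. Assuming the plugin.Set in your SDK version exposes a RegisterDatasource method (current SDK versions do), a hypothetical ImageDatasource component registered in main() as shown below would be available as the my-image data source.

// Hypothetical addition to the main() function above.
// Assume ImageDatasource implements the packer.Datasource interface.
pps.RegisterDatasource("image", new(ImageDatasource))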
Next, build your plugin as you would any other Go application. The resulting binary is the plugin that can be installed using standard installation procedures.
This documentation explains how to implement each type of plugin interface: builders, data sources, provisioners, and post-processors.
Lock your dependencies! Using go mod is highly recommended, since the Packer codebase will continue to improve, potentially breaking APIs along the way until there is a stable release. By locking your dependencies, your plugin will continue to work with the version of Packer you lock to.
Logging and Debugging
Plugins can use the standard Go log package to log. Anything logged this way will be available in the Packer log automatically. The Packer log is visible on stderr when the PACKER_LOG environment variable is set.
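For example, a hypothetical helper in your plugin could emit diagnostics through the standard library logger; this is a sketch of ordinary log usage, not an SDK API:

// logging.go: hypothetical example of plugin logging
package main

import "log"

// Anything written with the standard log package is captured by Packer
// and shown in its log output when PACKER_LOG is set.
func logBuildStep(step string) {
	log.Printf("starting build step: %s", step)
}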
Packer prefixes any logs from a plugin with the path to that plugin, so it is clear where the logs come from. Some example logs are shown below:
2013/06/10 21:44:43 Loading builder: custom
2013/06/10 21:44:43 packer-builder-custom: 2013/06/10 21:44:43 Plugin minimum port: 10000
2013/06/10 21:44:43 packer-builder-custom: 2013/06/10 21:44:43 Plugin maximum port: 25000
2013/06/10 21:44:43 packer-builder-custom: 2013/06/10 21:44:43 Plugin address: :10000
As you can see, the log messages from the custom builder plugin are prefixed with "packer-builder-custom". Log output is extremely helpful in debugging issues and you're encouraged to be as verbose as you need to be in order for the logs to be helpful.
Creating a GitHub Release
packer init does not work using a centralized registry. Instead, it requires you to publish your plugin in a GitHub repository named packer-plugin-*, where * represents the name of your plugin. You also need to create a GitHub release of your plugin with specific assets for the packer init download to work. We provide a pre-defined release workflow configuration using GitHub Actions. We strongly encourage maintainers to use this configuration to make sure the release contains the right assets with the right names for Packer to leverage during packer init installation.
Here's what you need to create releases using GitHub Actions:

1. Generate a GPG key to be used when signing releases (see GitHub's detailed instructions for help with this step):

gpg --full-generate-key

Choose all default options if available.

2. Export the GPG key. List your secret keys to find the key ID, then export the ASCII-armored private key:

$ gpg --list-secret-keys --keyid-format=long
/Users/hubot/.gnupg/secring.gpg
------------------------------------
sec   4096R/3AA5C34371567BD2 2016-03-10 [expires: 2017-03-10]
uid                          Hubot <hubot@example.com>
ssb   4096R/4BB6D45482678BE3 2016-03-10

$ gpg --armor --export-secret-keys 3AA5C34371567BD2

3. Copy the GoReleaser configuration from the packer-plugin-scaffolding repository to the root of your repository:

curl -L -o .goreleaser.yml \
  https://raw.githubusercontent.com/hashicorp/packer-plugin-scaffolding/main/.goreleaser.yml

4. Copy the GitHub Actions workflow from the packer-plugin-scaffolding repository to .github/workflows/release.yml in your repository:

mkdir -p .github/workflows && curl -L -o .github/workflows/release.yml \
  https://raw.githubusercontent.com/hashicorp/packer-plugin-scaffolding/main/.github/workflows/release.yml

5. Go to your repository page on GitHub and navigate to Settings > Secrets. Add the following secrets:
- GPG_PRIVATE_KEY - Your ASCII-armored GPG private key. You can export this with gpg --armor --export-secret-keys [key ID or email].
- GPG_PASSPHRASE - The passphrase for your GPG private key.

6. Push a new valid version tag (e.g. v1.2.3) to test that the GitHub Actions releaser is working. The tag must be a valid Semantic Version preceded with a v, and can be created using git tag -a v1.2.3 -m "v1.2.3". Once the tag is pushed, the GitHub Actions workflow you just configured will automatically build release binaries that Packer can download using packer init. For more details on how to install a plugin using packer init, see the init docs.
Registering Plugins
Note: Registering a plugin as an integration requires the documentation to match the Scaffolding example layout.
To help with the discovery of Packer plugins, plugin maintainers can choose to register their plugin as a Packer Integration.
The registration process requires adding metadata configuration to your plugin repository to configure the Packer integration pipeline, along with a specific directory structure so that plugin documentation can be rendered on the Packer Integrations portal.
You can execute the following steps to register your plugin as an integration:
1. Update your plugin documentation structure to match the Scaffolding example layout. New plugins generated from this template have the necessary structure in place; if so, you can jump to step 3.
2. For the integrations library, only one top-level README per integration is supported. Any top-level index.mdx files that exist within a plugin's existing documentation will need to migrate to a top-level README.
3. Update your top-level integration README to include a description, plugin installation steps, an available components section, and any additional sections needed to inform users on how to work with your integration. Refer to the Packer scaffolding plugin for an example.
4. Update the top-level README for each of the components within your integration to follow the structure defined in the scaffolding template.
5. Add the integration configuration file metadata.hcl to the plugin's .web-docs directory.
6. Open a request for integration issue with the Packer team (Open Request). Provide all the requested information to help expedite the integration request.
[Example] Add integration files to existing plugin repository
## Update Plugin repository with integration config, workflows, and scripts
cd packer-plugin-name
mkdir -p .web-docs/scripts
# Download the packer-plugin-scaffolding repo and copy its files
wget https://github.com/hashicorp/packer-plugin-scaffolding/archive/refs/heads/main.zip
unzip main.zip
cp packer-plugin-scaffolding-main/.web-docs/metadata.hcl .web-docs/
cp -r packer-plugin-scaffolding-main/.web-docs/scripts/ .web-docs/scripts/
cp packer-plugin-scaffolding-main/.github/workflows/notify-integration-release-via-* .github/workflows/
# Remove downloaded scaffolding project
rm main.zip
rm -rf packer-plugin-scaffolding-main
# Add the following target to your plugin's GNUmakefile
generate: install-packer-sdc
	@go generate ./...
	@rm -rf .docs
	@packer-sdc renderdocs -src docs -partials docs-partials/ -dst .docs/
	@./.web-docs/scripts/compile-to-webdocs.sh "." ".docs" ".web-docs" "<orgname>"
	@rm -r ".docs"
By opening an integration request, you are asking a member of the Packer team to review your plugin integration configuration and plugin documentation, and, finally, to open an internal pull request to finalize the integration setup.
Plugin integrations will be listed as a Packer Integration, with details on how to install and use your plugin.
Plugin integrations, once deployed, can be updated manually, or automatically upon a new release, by the plugin authors. Changes to the defined documentation structure or parent repository should be communicated to the Packer team to ensure a working integration pipeline.
Plugin Development Tips and FAQs
Working Examples
Here's a non-exhaustive list of Packer plugins that you can check out:
- github.com/hashicorp/packer-plugin-docker
- github.com/exoscale/packer-plugin-exoscale
- github.com/sylviamoss/packer-plugin-comment
Looking at their code will give you good examples.
Naming Conventions
It is standard practice to name the resulting plugin application in the format of packer-plugin-NAME. For example, if you're building a new builder for CustomCloud, it would be standard practice to name the resulting plugin packer-plugin-customcloud. This naming convention helps users identify the scope of a plugin.
Testing Plugins
Making your unpublished plugin available to Packer is possible by either:
- Starting Packer from the directory where the plugin binary is located.
- Putting the plugin binary in the same directory as Packer.
In both of these cases, if the binary is called packer-plugin-myawesomecloud and defines an ebs builder, then you will be able to use a myawesomecloud-ebs builder or source without needing a required_plugins block.
This is extremely useful during development.
Distributing Plugins
We recommend that you use a tool like GoReleaser to cross-compile your plugin for every platform that Packer supports, since Go binaries are platform-specific. If you created your plugin from the packer-plugin-scaffolding repo, simply tagging a commit and pushing the tag to GitHub will correctly build and release the binaries using GoReleaser.