1.6.14.1. mapotf: A Terraform Metaprogramming Tool
In large-scale Infrastructure as Code (IaC) projects, teams often face the challenge of uniformly managing and modifying Terraform configurations. Examples include adding modtm calls to all modules, modifying the tags sent to modtm, or mandating minimum versions for the AzureRM / AzAPI Providers. Certain limitations within Terraform itself can also lead to awkward scenarios in daily use. For instance, some users wish to enable `ignore_changes` on resources within a Terraform module to ignore drift in specific attributes, but Terraform currently does not support using variables within lifecycle arguments like `ignore_changes`. This means open-source module developers cannot use parameterization to flexibly meet different users' needs for attribute ignoring. Furthermore, different module developers might independently implement common design patterns (such as creating private endpoints for databases or configuring logging for storage buckets), resulting in slight variations across modules. If a unified pattern library existed, module authors would not need to search for examples or tutorials everywhere; they could simply apply the shared patterns.
To address these pain points, the Microsoft Azure team released an open-source tool named mapotf. mapotf stands for "MetA PrOgramming for TerraForm". As the name suggests, it aims to provide a metaprogramming mechanism for Terraform, allowing for programmatic modifications to Terraform configurations. Simply put, mapotf is a metaprogramming tool used in conjunction with Terraform. With mapotf, developers and platform engineering teams can dynamically generate, modify, or remove Terraform configurations without directly altering the source code, thereby satisfying scenarios where Terraform's native capabilities are limited.
1.6.14.1.1. Roles and Functions of mapotf
The core idea of mapotf is to treat Terraform configurations as objects that can be matched and rewritten, applying changes in batches through declarative rules. mapotf provides two main types of blocks: Data Sources (data) and Transforms (transform). Users define specific data blocks in a mapotf configuration file (written in HCL) to match certain target elements within the Terraform configuration; they then define how to rewrite these matched parts via transform blocks. During the transformation phase, mapotf supports various operations, such as in-place updates of existing attribute values, insertion of new resources or blocks, and deletion of specified configuration fragments. All of this is automatically handled by the mapotf engine based on the configuration rules, effectively providing an intelligent "find and replace" mechanism for Terraform configurations.
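As an illustrative sketch of this data/transform pairing (the resource type and attribute here are chosen for illustration; the block shapes mirror the real examples later in this section), a minimal mapotf configuration might look like:

```hcl
# Match every azurerm_storage_account resource block in the target configuration.
data "resource" "storage" {
  resource_type = "azurerm_storage_account"
}

# Rewrite each matched block in place; the asstring block treats attribute
# values as raw Terraform source text to be written into the matched block.
transform "update_in_place" "enforce_https" {
  for_each             = try(data.resource.storage.result.azurerm_storage_account, {})
  target_block_address = each.value.mptf.block_address
  asstring {
    # Illustrative attribute: force HTTPS-only traffic on every matched account.
    https_traffic_only_enabled = "true"
  }
}
```

The `data` block collects matching blocks from the Terraform configuration, and the `transform` block iterates over them by block address, much like `for_each` over resources in ordinary Terraform.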
Through such metaprogramming capabilities, mapotf enables many powerful use cases:
- Supporting customizations natively unsupported by Terraform: Addressing the previously mentioned `ignore_changes` scenario, mapotf can dynamically add the required `ignore_changes` attributes to resource blocks without changing the module source code, thus bypassing Terraform's limitation. As the mapotf author mentioned in a related Terraform issue: "Through metaprogramming, we can customize and modify any configuration in the root module and dependent modules." This allows module authors to provide an optional external patch that users apply as needed. For example, some users employ custom Azure Policies or AWS Config to automatically correct resources, causing resource attributes to be modified by the platform and resulting in drift. With mapotf, different users can define their own lists of attributes to ignore, without the module having to hardcode for every possibility.
- Promoting common patterns and best practices: mapotf can serve as a pattern library tool. For common architectural requirements (such as adding monitoring diagnostics to specific resources or configuring private links), platform teams can pre-write mapotf configurations that encapsulate these proven patterns. When designing modules, developers do not need to implement these patterns from scratch; they simply reference the corresponding mapotf configuration to automatically apply the pattern to the module. This reduces repetitive work and ensures consistency across modules. For example, different enterprises and teams might all write modules containing Azure Storage Account services. Regardless of how a module is implemented, each might need to add a Private Endpoint for the Storage Account. In this case, we can write a mapotf configuration that matches `azurerm_storage_account` resource blocks in the code and generates the corresponding Private Endpoint resources for them.
- Centralized governance: For Terraform modules used at scale within an organization, mapotf provides a means to apply changes centrally. Platform governance teams (DevOps/SRE) can maintain a set of mapotf rules to uniformly adjust all modules in bulk: for example, requiring every module's resources to carry a specific tag, or inserting an extra resource for auditing/telemetry. With mapotf, such changes can be executed automatically across multiple modules at once without manually modifying each one, greatly improving governance efficiency.
- Batch upgrade of module configurations: When cloud providers release major version updates for their Terraform Providers, they often introduce breaking changes that force module code to be adjusted. mapotf is highly effective here. By writing a set of rules mapping old configurations to new requirements, mapotf can automatically refactor Terraform configurations to comply with the upgrade guide of the new Provider version. For instance, when upgrading the AzureRM Provider from 3.x to 4.x, one can write corresponding mapotf upgrade code to batch-process resource attribute renames, field deletions, and so on. A practical tool in this area is TerraformConfigWelder, which uses the mapotf configuration mechanism to perform Provider upgrade transformations. In other words, mapotf provides "batch scaffolding" for module maintainers, allowing large-scale mechanical replacements (e.g., when a Provider drops support for a field) to be completed quickly and with reduced risk of error.
To summarize mapotf more vividly: If a Terraform Module is a set of reusable Terraform configurations, then mapotf is a set of reusable change patterns for Terraform code.
1.6.14.1.2. Usage
Installing mapotf is simple; the latest version can be installed with the Go toolchain:
go install github.com/Azure/mapotf@latest
Once installed, mapotf works as a command-line tool. It operates as a wrapper around Terraform and does not replace Terraform's own functionality. The typical workflow for mapotf is as follows:
- Prepare the mapotf configuration: Users provide the "metaprogramming" code for mapotf. This can be a local folder or a reference to a configuration directory in a remote Git repository; the `--mptf-dir` parameter specifies the configuration source. For example, Azure provides official example configuration repositories that can be referenced directly via a Git URL. These configuration files are written in standard HCL syntax and contain several `data` blocks and corresponding `transform` blocks that define the targets to match and the transformation instructions.
- Execute the transformation: mapotf provides two main execution modes:
  - Apply immediately (`apply`): The `mapotf apply` command executes the transformation and immediately calls `terraform apply` to deploy the changes. In this mode, mapotf first downloads and loads the specified configuration rules, backs up the current Terraform files, modifies them (e.g., inserting or updating code snippets), and then automatically triggers Terraform's plan and apply operations, redirecting stdout and stderr to the Terraform process. After the Terraform process ends, it restores the Terraform files to their pre-transformation state and deletes the backup files.
  - Transform only (`transform`): The `mapotf transform` command only executes the code transformation, without invoking a subsequent Terraform deployment. mapotf keeps the modified Terraform files and generates a backup copy (with the extension `.tf.mptfbackup`) for each changed file, for reference or rollback. In this mode, users can review the code differences themselves and then manually run `terraform plan` / `terraform apply`. If issues are found, `mapotf reset` quickly restores the files. Once the changes are confirmed as necessary and correct, `mapotf clean-backup` clears all backup files, keeping the codebase clean.
- View and verify the changes: After the transformation is executed, whether in `apply` or `transform` mode, mapotf reflects the results directly in the Terraform code. For example, in the AKS cluster example discussed later, mapotf adds the `microsoft_defender[0].log_analytics_workspace_id` attribute to the `ignore_changes` list in the lifecycle configuration of the `azurerm_kubernetes_cluster` resource. Users can open the corresponding `.tf` files to see the newly inserted configuration, while the original content is backed up in `.tf.mptfbackup` files. Only when the user confirms the Terraform deployment (or saves the changes) do these modifications truly take effect. If the user chooses not to apply partway through (e.g., answering "no" during `mapotf apply`), mapotf automatically restores all files and deletes the backups, ensuring the codebase is not left in an unexpected intermediate state.
In this way, mapotf automates operations that would otherwise require manual editing of Terraform code. Developers can use it as an assistive tool during development to adjust configurations temporarily as needed, or integrate it into CI/CD pipelines to execute standardized transformations in bulk. It is worth noting that since mapotf directly modifies Terraform configuration files, the resulting changes should be included in version control and reviewed to ensure modifications to the infrastructure are traceable and expected. Also, mapotf's changes to Terraform are not limited to .tf files in the current folder; by setting the -r parameter, it recursively transforms all involved Terraform module code, allowing users to dynamically customize third-party open-source module code.
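Putting these commands together, a typical transform-and-review session might look like the following sketch (the rules directory is a placeholder; consult `mapotf`'s help output for the full flag list):

```shell
# Rewrite the current module, recursing into referenced modules (-r).
# Rules may come from a local directory or a Git URL.
mapotf transform -r --mptf-dir ./mptf-rules --tf-dir .

# Review the rewritten .tf files (originals are kept as *.tf.mptfbackup),
# then run Terraform as usual.
terraform plan

# Happy with the result: drop the backups.
mapotf clean-backup
# Not happy: restore the pre-transformation files instead.
# mapotf reset
```

Running `mapotf apply` instead collapses the transform, plan, apply, and restore steps into a single interactive command.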
1.6.14.1.3. Integration with Pre-commit and Application in AVM
mapotf can not only be executed manually by developers but also integrates well into code management workflows. For instance, in the Azure Verified Modules (AVM) module development framework, mapotf is used as part of the pre-commit step to automatically apply prescribed configuration transformations before developers commit code. The Terraform Module Scaffold (tfmod-scaffold) provided by Azure includes a pre-commit hook script that calls mapotf to execute centrally defined transformation rules. The process is as follows:
- When a developer prepares to commit AVM module code, the pre-commit script runs `mapotf transform`, pointing to a set of remote rules maintained by AVM (stored in the `avm_mapotf/pre_commit` directory). These rules cover unified changes AVM requires of all modules, such as the injection of telemetry code and consistency adjustments for module/Provider versions. Because the rules are hosted centrally, the AVM team can update the transformation logic in the central repository, and all modules automatically fetch and apply the latest rules the next time pre-commit runs, ensuring immediate enforcement of governance policies.
- After applying the transformations, the hook calls `avmfix` for further processing and then runs `mapotf clean-backup` to remove the backup files. Cleaning backups is necessary because, in a pre-commit scenario, the transformed code is intended to be committed directly to the repository, so redundant backup files are not needed. The entire process is transparent to the developer, unless the rules cause code changes that have not been staged, in which case the pre-commit check reports an error asking the developer to re-add the modified files. This mechanism ensures that all committed code has been standardized by `mapotf`.
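The hook's sequence can be sketched roughly as follows (the rules repository URL and the `avmfix` invocation are placeholders; the actual script in tfmod-scaffold wires in the real locations):

```shell
# 1. Pull the centrally maintained AVM rules and rewrite the module in place.
#    <AVM-RULES-REPO> stands in for the repository hosting avm_mapotf/pre_commit.
mapotf transform --mptf-dir "git::<AVM-RULES-REPO>//avm_mapotf/pre_commit" --tf-dir .

# 2. Normalize formatting and style in the rewritten code (arguments illustrative).
avmfix

# 3. Remove *.tf.mptfbackup files so only the transformed code gets committed.
mapotf clean-backup
```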
In the governance of Azure Verified Modules, mapotf plays a core role:
- Telemetry implantation: Every AVM Terraform module needs to include a special deployment (such as `main.telemetry.tf`) to identify deployments of that module. This GUID-identified telemetry allows Microsoft to count module usage frequency without collecting specific resource content, complying with privacy requirements. To ensure all modules correctly include the telemetry deployment and stay consistent, the AVM team uses mapotf to uniformly insert or update this code. When the telemetry mechanism needs adjustment (e.g., changing the GUID generation method or adding fields), simply updating the mapotf rule applies it to all modules in bulk, achieving a "change once, effective everywhere" effect without manually editing each module.
- Unified Provider upgrades: When Providers like AzureRM release major version updates, AVM requires all official modules to follow suit to leverage new features and maintain support. With mapotf, the AVM team can publish corresponding transformation rules to uniformly upgrade module configurations. These rules can cover modifying version constraints in `required_providers`, replacing deprecated resource types or attributes, and adjusting internal module logic for compatibility with the new version. For example, migrating from AzureRM 3.x to 4.x requires replacing several resource names and parameters; mapotf upgrade rules perform batch replacements and deletions on module code according to the official guidelines. Module maintainers simply run pre-commit or a dedicated upgrade script to automatically complete most of the work, followed by manual verification of the few parts that cannot be handled automatically. This centralized upgrade approach significantly reduces the communication costs and error probability associated with decentralized maintenance.
Overall, mapotf provides a centralized control, decentralized execution mechanism within the Azure Verified Modules ecosystem: governors define rules centrally, and each module executes rules via mapotf in its own codebase, implementing standard distribution and enforcement. This greatly enhances the platform team's control over the module ecosystem while retaining the flexibility of module development.
1.6.14.1.4. Real-World Application Scenarios
A few concrete examples further illustrate the practical uses of mapotf:
- Customization of attribute changes (`ignore_changes`): The `mapotf_demo/ignore_changes` example demonstrates how mapotf dynamically adds `ignore_changes` settings to Terraform resources to ignore changes in specific attributes.
data "resource" "resource_group" {
  resource_type = "azurerm_resource_group"
}

transform "update_in_place" "resource_group_ignore_changes" {
  for_each             = try(data.resource.resource_group.result.azurerm_resource_group, {})
  target_block_address = each.value.mptf.block_address
  asstring {
    lifecycle {
      ignore_changes = "[\ntags, ${trimprefix(try(each.value.lifecycle.0.ignore_changes, "[\n]"), "[")}"
    }
  }
}
This code matches resource blocks of type azurerm_resource_group and dynamically adds tags to the ignore_changes list.
This corresponds to the scenario mentioned earlier: certain infrastructure settings are automatically modified by external policies, and if they are not ignored, Terraform attempts to revert them on every plan. With mapotf, users can append configuration to ignore these attributes as needed, without modifying the module source code. For instance, in the Azure AKS module example, mapotf successfully added `microsoft_defender[0].log_analytics_workspace_id` to the AKS cluster resource's `ignore_changes` list. This lets module users conveniently avoid spurious Terraform resource updates, accommodating the configuration drift caused by custom enterprise policies.
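Under this transform, a plain resource block in the module would be rewritten roughly as follows (a sketch of the effect; the resource names are illustrative and exact whitespace may differ):

```hcl
# Before: the module source as written by its author.
resource "azurerm_resource_group" "example" {
  name     = "rg-demo"
  location = "eastus"
}

# After mapotf transform: an ignore_changes entry is injected (prepended to
# any existing list) without touching the module's upstream source.
resource "azurerm_resource_group" "example" {
  name     = "rg-demo"
  location = "eastus"
  lifecycle {
    ignore_changes = [
      tags,
    ]
  }
}
```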
- Massive module governance transformation: The `mapotf_demo/massive_terraform_module_governing` example shows how mapotf can be used to batch-adjust code across multiple modules.
data "resource" "azapi_resource" {
  resource_type = "azapi_resource"
}

locals {
  azapi_resource_blocks = data.resource.azapi_resource.result.azapi_resource
  azapi_resource_map    = { for _, block in local.azapi_resource_blocks : block.mptf.block_address => block }
  payload = jsonencode({
    avm = "true"
  })
  compact_payload = replace(replace(replace(replace(local.payload, " ", ""), "\n", ""), "\r", ""), "\t", "")
  create_headers = {
    for address, block in local.azapi_resource_map :
    address => try(replace(replace(replace(replace(block.create_headers, " ", ""), "\n", ""), "\r", ""), "\t", ""), "")
  }
  delete_headers = {
    for address, block in local.azapi_resource_map :
    address => try(replace(replace(replace(replace(block.delete_headers, " ", ""), "\n", ""), "\r", ""), "\t", ""), "")
  }
  read_headers = {
    for address, block in local.azapi_resource_map :
    address => try(replace(replace(replace(replace(block.read_headers, " ", ""), "\n", ""), "\r", ""), "\t", ""), "")
  }
  update_headers = {
    for address, block in local.azapi_resource_map :
    address => try(replace(replace(replace(replace(block.update_headers, " ", ""), "\n", ""), "\r", ""), "\t", ""), "")
  }
}

transform "update_in_place" "headers" {
  for_each             = local.azapi_resource_map
  target_block_address = each.key
  asstring {
    create_headers = try(strcontains(local.create_headers[each.key], local.compact_payload), false) ? each.value.create_headers : try(each.value.create_headers == "" || each.value.create_headers == null, true) ? local.payload : "merge(${each.value.create_headers}, ${local.payload})"
    delete_headers = try(strcontains(local.delete_headers[each.key], local.compact_payload), false) ? each.value.delete_headers : try(each.value.delete_headers == "" || each.value.delete_headers == null, true) ? local.payload : "merge(${each.value.delete_headers}, ${local.payload})"
    read_headers = try(strcontains(local.read_headers[each.key], local.compact_payload), false) ? each.value.read_headers : try(each.value.read_headers == "" || each.value.read_headers == null, true) ? local.payload : "merge(${each.value.read_headers}, ${local.payload})"
    update_headers = try(strcontains(local.update_headers[each.key], local.compact_payload), false) ? each.value.update_headers : try(each.value.update_headers == "" || each.value.update_headers == null, true) ? local.payload : "merge(${each.value.update_headers}, ${local.payload})"
  }
}
The code above matches all resource blocks of the AzAPI Provider and dynamically configures their `create_headers` / `delete_headers` / `read_headers` / `update_headers` attributes. By tracking these headers, the Azure team can identify which requests sent to the Azure API originate from AVM modules, including exactly which modules and versions.
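The effect on a module's AzAPI resources is roughly the following (a sketch; the resource's `type`, `name`, and other arguments are illustrative):

```hcl
# Before: an azapi_resource with no tracking headers set.
resource "azapi_resource" "example" {
  type     = "Microsoft.Resources/resourceGroups@2024-03-01"
  name     = "rg-demo"
  location = "eastus"
}

# After the transform: the payload from jsonencode({ avm = "true" }) is
# written in verbatim; delete/read/update_headers receive the same treatment.
# If create_headers had already been set, the transform would instead emit
# merge(<existing expression>, {"avm":"true"}).
resource "azapi_resource" "example" {
  type           = "Microsoft.Resources/resourceGroups@2024-03-01"
  name           = "rg-demo"
  location       = "eastus"
  create_headers = {"avm":"true"}
}
```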
Assuming an organization requires all official modules to add a standard configuration (like the previously mentioned Telemetry deployment or a unified tagging policy), mapotf can insert the corresponding snippet into each module's code via a single command, achieving a "one-click bulk change". This demonstrates mapotf's value in module governance: whether it's a dozen or hundreds of modules, they can undergo unified transformation through centrally defined rules, avoiding the time consumption and potential omissions of manual modification.
- Major Provider upgrade transformation: The `mapotf_demo/terraform_provider_major_upgrade` example demonstrates how mapotf assists with Terraform Provider version upgrades. When migrating modules from an old Provider version to a new one, mapotf, in conjunction with tools like TerraformConfigWelder, systematically replaces and adjusts module code according to the upgrade guide. For example, when the AzureRM Provider version jumps, attribute names of certain resource blocks might change, or legacy parameters might need deletion. Using pre-written mapotf transformation rules, these replacement/deletion operations can be executed automatically (by running `mapotf transform --mptf-dir git::https://github.com/lonegunmanb/TerraformConfigWelder.git//azurerm/v3_v4 --tf-dir .`), leaving the developer with only a small amount of manual checking. Real-world cases show that using mapotf for upgrades automates the tedious repetitive work and significantly reduces the time required.
In the mapotf examples given above, we can see that, similar to grept, mapotf is deliberately designed to be highly compatible with Terraform syntax. Proficient Terraform users can easily master mapotf syntax in a short time and begin metaprogramming.
In summary, as a metaprogramming tool for Terraform, mapotf offers unique value to Terraform users, module developers, and platform governance teams. For ordinary users, it provides a way to extend Terraform's capabilities, meeting special needs (such as ignoring configuration drift) with temporary adjustments. For module developers, mapotf is a powerful tool for enhancing module flexibility, allowing complex or optional logic to live outside the module and be applied by users on demand, lightening the burden on the module itself. Finally, for DevOps/SRE teams responsible for platform governance, mapotf is a great helper for centralized control, enabling the rapid distribution of global consistency requirements (such as telemetry, upgrades, and security policies) without interrupting the autonomous evolution of individual modules.