accelforge.frontend.mapper package#

Submodules#

accelforge.frontend.mapper.ffm module#

class accelforge.frontend.mapper.ffm.FFM[source]#

Bases: EvalableModel

Configuration for the Fast and Fusiest Mapper.

force_memory_hierarchy_order: bool#

If set to True, storage nodes for lower-level memories must be placed below storage nodes for higher-level memories. For example, all MainMemory storage nodes must go above all LocalBuffer storage nodes.

This constraint always applies to same-tensor storage nodes (e.g., MainMemory reusing Output must go above LocalBuffer reusing Output); turning it off will permit things like MainMemory reusing Output going above LocalBuffer reusing Input.

info_metrics: Metrics#

Metrics to be reported for final mappings.

max_fused_loops: float | int#

The maximum total number of fused loops in a pmapping.

max_fused_loops_per_rank_variable: int#

The maximum number of fused loops in a pmapping for a given rank variable.

max_loops: float | int#

The maximum total loops in a pmapping.

max_loops_minus_ranks: float | int#

The maximum total loops in a pmapping minus the number of ranks. For example, 3 means that the number of loops can be up to (the number of ranks + 3).
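As a sketch of the arithmetic, using a hypothetical helper (not part of the API):

```python
def max_allowed_loops(num_ranks: int, max_loops_minus_ranks: int) -> int:
    # Total loops permitted = number of ranks + max_loops_minus_ranks.
    return num_ranks + max_loops_minus_ranks

# With 4 ranks and max_loops_minus_ranks = 3, up to 7 loops are allowed.
assert max_allowed_loops(4, 3) == 7
```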

max_pmapping_templates_per_einsum: float | int#

The maximum number of pmapping templates per Einsum. Once this many templates are generated, the mapper will stop generating more. This is useful for debugging (e.g., to find out why so many templates are being generated).

memory_limit: float | int#

The maximum amount of memory the mapper may use.

memory_limit_per_process: float | int#

The maximum amount of memory each of the mapper’s processes may use.

metrics: Metrics#

Metrics used to optimize mappings.

objective_tolerance: float#

Reduces memory usage and runtime for the mapper. When set to a nonzero value, the mapper may return mappings up to (1 + tolerance)× optimal. Also see resource_usage_tolerance to further reduce mapper memory usage and runtime.

out_of_order_hierarchy_explore_removing_spatials_for_more_temporals: bool#

If force_memory_hierarchy_order is set to False globally or for any particular component, then when storage nodes are reordered, a spatial loop may end up placed above a storage node for a component that does not have the associated fanout.

When this happens, the mapper may not place, between the spatial loop and that storage node, any temporal loops that affect the same indexing expressions as the spatial loop.

For example, the following is not allowed:

Arch:

  • Global Buffer

  • 2x fanout

  • Register

Mapping:

spatial-for-reg n in [0, 10):
  [Register reuses input]
  for n in [0, 2):
    [Global Buffer reuses output]

By default, if there are spatial loops that are not constrained away, then the mapper will not explore placing any conflicting temporal loops; in the above example, it will never place the temporal loop shown. If this is set to True, then the mapper will also explore removing the spatial loop in order to allow the temporal loop to be placed.

prioritize_reuse_of_unfused_tensors: bool#

If set to True, then for all memory levels, the mapper will place the storage nodes of unfused tensors above those of fused tensors. This is overridden if there is any tensor_order_options specified for a memory level. The result of this is that the mapper will avoid mappings that repeatedly fetch unfused tensors in order to allow for smaller tiles of fused tensors. This may lead to better mappings, but slows down the mapper.

resource_usage_tolerance: float#

Reduces memory usage and runtime for the mapper. When set to a nonzero value, the mapper may drop mappings with resource usage > (1 - tolerance)× optimal. The mapper is guaranteed to return all Pareto-optimal mappings with resource usage below this threshold, and perhaps more. If Metrics.RESOURCE_USAGE is set, this option is ignored. Setting both this and objective_tolerance to values greater than zero reduces the mapper’s memory usage and runtime.
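As a sketch of how the two tolerances behave, using hypothetical helper functions (the actual pruning logic is internal to the mapper):

```python
def objective_ok(objective: float, optimal: float, tolerance: float) -> bool:
    # objective_tolerance: a returned mapping's objective may be up to
    # (1 + tolerance) x the optimal objective value.
    return objective <= (1 + tolerance) * optimal

def resource_usage_kept(usage: float, optimal: float, tolerance: float) -> bool:
    # resource_usage_tolerance: mappings with resource usage above
    # (1 - tolerance) x optimal may be dropped; those at or below it
    # are guaranteed to be kept.
    return usage <= (1 - tolerance) * optimal
```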

time_limit: float | int#

The maximum time the mapper may run.

time_limit_per_pmapping_template: float | int#

The maximum time the mapper may spend on a single pmapping template.

accelforge.frontend.mapper.mapper module#

class accelforge.frontend.mapper.mapper.Mapper[source]#

Bases: EvalableModel

ffm: FFM#

Fast and Fusiest Mapper configuration. Currently the only supported mapper.

accelforge.frontend.mapper.metrics module#

class accelforge.frontend.mapper.metrics.Metrics[source]#

Bases: Flag

Metrics used to optimize mappings or reported by model.

ACTIONS = 32#

Action counts.

DETAILED_MEMORY_USAGE = 64#

Memory usage broken down by tensor and Einsum.

DYNAMIC_ENERGY = 4#

The amount of dynamic energy consumed by the workload.

ENERGY = 2#

The amount of energy consumed by the workload.

LATENCY = 1#

The amount of time taken to execute the workload.

LEAK_ENERGY = 8#

The amount of leak energy consumed by the workload.

RESOURCE_USAGE = 16#

The amount of resources used by the workload.

When used as a mapper objective, this metric is multivariate and must consider every resource available to the hardware.

__new__(value)#
classmethod all_metrics()[source]#
includes_dynamic_energy()[source]#

Returns True if the metrics include dynamic energy, either alone or as part of total energy. False otherwise.

Return type:

bool

includes_leak_energy()[source]#

Returns True if the metrics include leak energy, either alone or as part of total energy. False otherwise.

Return type:

bool
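Since Metrics is a Flag, values combine with bitwise operators. The sketch below uses a minimal stand-in enum with the same member values to illustrate the combination semantics, along with a plausible (assumed, not the actual) implementation of includes_dynamic_energy, where dynamic energy counts if requested directly or as part of total ENERGY:

```python
from enum import Flag

class Metrics(Flag):
    # Stand-in mirroring the documented member values.
    LATENCY = 1
    ENERGY = 2
    DYNAMIC_ENERGY = 4
    LEAK_ENERGY = 8
    RESOURCE_USAGE = 16
    ACTIONS = 32
    DETAILED_MEMORY_USAGE = 64

    def includes_dynamic_energy(self) -> bool:
        # Assumed semantics: dynamic energy is included either alone
        # or as part of total energy.
        return bool(self & (Metrics.DYNAMIC_ENERGY | Metrics.ENERGY))

m = Metrics.LATENCY | Metrics.DYNAMIC_ENERGY
print(m.includes_dynamic_energy())                # True: requested directly
print(Metrics.ENERGY.includes_dynamic_energy())   # True: part of total energy
print(Metrics.LATENCY.includes_dynamic_energy())  # False
```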

Module contents#

class accelforge.frontend.mapper.FFM[source]#

Bases: EvalableModel

Configuration for the Fast and Fusiest Mapper.

force_memory_hierarchy_order: bool#

If set to True, storage nodes for lower-level memories must be placed below storage nodes for higher-level memories. For example, all MainMemory storage nodes must go above all LocalBuffer storage nodes.

This constraint always applies to same-tensor storage nodes (e.g., MainMemory reusing Output must go above LocalBuffer reusing Output); turning it off will permit things like MainMemory reusing Output going above LocalBuffer reusing Input.

info_metrics: Metrics#

Metrics to be reported for final mappings.

max_fused_loops: float | int#

The maximum total number of fused loops in a pmapping.

max_fused_loops_per_rank_variable: int#

The maximum number of fused loops in a pmapping for a given rank variable.

max_loops: float | int#

The maximum total loops in a pmapping.

max_loops_minus_ranks: float | int#

The maximum total loops in a pmapping minus the number of ranks. For example, 3 means that the number of loops can be up to (the number of ranks + 3).

max_pmapping_templates_per_einsum: float | int#

The maximum number of pmapping templates per Einsum. Once this many templates are generated, the mapper will stop generating more. This is useful for debugging (e.g., to find out why so many templates are being generated).

memory_limit: float | int#

The maximum amount of memory the mapper may use.

memory_limit_per_process: float | int#

The maximum amount of memory each of the mapper’s processes may use.

metrics: Metrics#

Metrics used to optimize mappings.

objective_tolerance: float#

Reduces memory usage and runtime for the mapper. When set to a nonzero value, the mapper may return mappings up to (1 + tolerance)× optimal. Also see resource_usage_tolerance to further reduce mapper memory usage and runtime.

out_of_order_hierarchy_explore_removing_spatials_for_more_temporals: bool#

If force_memory_hierarchy_order is set to False globally or for any particular component, then when storage nodes are reordered, a spatial loop may end up placed above a storage node for a component that does not have the associated fanout.

When this happens, the mapper may not place, between the spatial loop and that storage node, any temporal loops that affect the same indexing expressions as the spatial loop.

For example, the following is not allowed:

Arch:

  • Global Buffer

  • 2x fanout

  • Register

Mapping:

spatial-for-reg n in [0, 10):
  [Register reuses input]
  for n in [0, 2):
    [Global Buffer reuses output]

By default, if there are spatial loops that are not constrained away, then the mapper will not explore placing any conflicting temporal loops; in the above example, it will never place the temporal loop shown. If this is set to True, then the mapper will also explore removing the spatial loop in order to allow the temporal loop to be placed.

prioritize_reuse_of_unfused_tensors: bool#

If set to True, then for all memory levels, the mapper will place the storage nodes of unfused tensors above those of fused tensors. This is overridden if there is any tensor_order_options specified for a memory level. The result of this is that the mapper will avoid mappings that repeatedly fetch unfused tensors in order to allow for smaller tiles of fused tensors. This may lead to better mappings, but slows down the mapper.

resource_usage_tolerance: float#

Reduces memory usage and runtime for the mapper. When set to a nonzero value, the mapper may drop mappings with resource usage > (1 - tolerance)× optimal. The mapper is guaranteed to return all Pareto-optimal mappings with resource usage below this threshold, and perhaps more. If Metrics.RESOURCE_USAGE is set, this option is ignored. Setting both this and objective_tolerance to values greater than zero reduces the mapper’s memory usage and runtime.

time_limit: float | int#

The maximum time the mapper may run.

time_limit_per_pmapping_template: float | int#

The maximum time the mapper may spend on a single pmapping template.

class accelforge.frontend.mapper.Mapper[source]#

Bases: EvalableModel

ffm: FFM#

Fast and Fusiest Mapper configuration. Currently the only supported mapper.

class accelforge.frontend.mapper.Metrics[source]#

Bases: Flag

Metrics used to optimize mappings or reported by model.

ACTIONS = 32#

Action counts.

DETAILED_MEMORY_USAGE = 64#

Memory usage broken down by tensor and Einsum.

DYNAMIC_ENERGY = 4#

The amount of dynamic energy consumed by the workload.

ENERGY = 2#

The amount of energy consumed by the workload.

LATENCY = 1#

The amount of time taken to execute the workload.

LEAK_ENERGY = 8#

The amount of leak energy consumed by the workload.

RESOURCE_USAGE = 16#

The amount of resources used by the workload.

When used as a mapper objective, this metric is multivariate and must consider every resource available to the hardware.

__new__(value)#
classmethod all_metrics()[source]#
includes_dynamic_energy()[source]#

Returns True if the metrics include dynamic energy, either alone or as part of total energy. False otherwise.

Return type:

bool

includes_leak_energy()[source]#

Returns True if the metrics include leak energy, either alone or as part of total energy. False otherwise.

Return type:

bool