Using Moose with PyMoose Computations#

This example demonstrates how to convert a PyMoose computation (i.e. a function decorated with pm.computation) into its equivalent textual form. The resulting text file can be used directly with the Moose command line tools to analyze, compile, and evaluate computations.

Table of Contents
  1. Generating a Moose computation in textual form
  2. Using Elk to compile and analyze the textual form
  3. Using Elk to get an evaluation-ready computation
  4. Evaluating the textual form against a Moose runtime
import pathlib

import numpy as np

import pymoose as pm
from pymoose.computation import utils

Generating a Moose computation in textual form#

We want to work with some secure computation, e.g. the function below, which securely computes a dot product of two public constants.

FIXED = pm.fixed(24, 40)

player0 = pm.host_placement("player0")
player1 = pm.host_placement("player1")
player2 = pm.host_placement("player2")
repl = pm.replicated_placement("replicated", [player0, player1, player2])

@pm.computation
def my_computation():
    with player0:
        x = pm.constant(np.array([1., 2., 3.]).reshape((1, 3)))
        x = pm.cast(x, dtype=FIXED)
    with player1:
        w = pm.constant(np.array([4., 5., 6.]).reshape((3, 1)))
        w = pm.cast(w, dtype=FIXED)
    
    with repl:
        y_hat = pm.dot(x, w)

    with player2:
        result = pm.cast(y_hat, dtype=pm.float64)
    return result

(Note that using constants this way is not generally secure, since constants are embedded in the computation graph in plaintext. This is just a simple, pedagogical example of a replicated computation.)
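For reference, the plaintext value this computation should reproduce can be checked directly with NumPy:

```python
import numpy as np

# the same public constants used in my_computation
x = np.array([1., 2., 3.]).reshape((1, 3))
w = np.array([4., 5., 6.]).reshape((3, 1))

# plaintext dot product: 1*4 + 2*5 + 3*6 = 32
print(x @ w)  # [[32.]]
```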

Next we’ll implement a function that writes this high-level, abstract computation to a text file at the location given by filepath.

def comp_to_moose(computation_func, filepath):
    traced_comp: pm.edsl.base.AbstractComputation = pm.trace(computation_func)
    comp_bin: bytes = utils.serialize_computation(traced_comp)
    rust_comp: pm.MooseComputation = pm.elk_compiler.compile_computation(comp_bin, passes=[])
    textual_comp: str = rust_comp.to_textual()
    with open(filepath, "w") as f:
        f.write(textual_comp)

Use of the compile_computation function here might seem a bit strange at first glance. Since we are passing an empty list of compiler passes to run (passes=[]), this will be a no-op for the Elk compiler. But by using compile_computation we are implicitly marshaling the Python computation into its canonical Moose computation in Rust, since this function is actually a Rust binding. We get a MooseComputation object back, which is just a reference to the Rust-managed Moose computation.

This object has a few methods, including to_textual(), which renders the computation as its canonical string representation. We call this string the “textual form” or “textual representation” of a Moose computation. When computations in textual form are written to a file, we use the .moose extension by convention. Working with computations in textual form allows us to use them with the lower-level Moose and Elk machinery that the Moose command line tools offer, since those tools are built around the textual representation.

After writing the computation to disk, we’ll exclusively use Moose command line tools to work with it.

this_dir = pathlib.Path.cwd()
comp_to_moose(my_computation, this_dir / "dotprod.moose")
print(this_dir)
/home/docs/checkouts/readthedocs.org/user_builds/pymoose/checkouts/latest/pymoose/docs/source

Using Elk to compile and analyze the textual form#

Since we passed an empty list of compiler passes instead of using the default passes, this computation remains un-compiled, and its textual form reflects that:

!cat dotprod.moose
constant_0 = Constant{value = HostFloat64Tensor([[1.0, 2.0, 3.0]])}: () -> Tensor<Float64> () @Host(player0)
constant_1 = Constant{value = HostFloat64Tensor([[4.0], [5.0], [6.0]])}: () -> Tensor<Float64> () @Host(player1)
cast_1 = Cast: (Tensor<Float64>) -> Tensor<Fixed128(24, 40)> (constant_1) @Host(player1)
cast_0 = Cast: (Tensor<Float64>) -> Tensor<Fixed128(24, 40)> (constant_0) @Host(player0)
dot_0 = Dot: (Tensor<Fixed128(24, 40)>, Tensor<Fixed128(24, 40)>) -> Tensor<Fixed128(24, 40)> (cast_0, cast_1) @Replicated(player0, player1, player2)
cast_2 = Cast: (Tensor<Fixed128(24, 40)>) -> Tensor<Float64> (dot_0) @Host(player2)
output_0 = Output{tag = "output_0"}: (Tensor<Float64>) -> Tensor<Float64> (cast_2) @Host(player2)

For example, all values in the computation have the generic type Tensor<T> – this is a higher-level type that is erased during the Lowering pass of Elk’s default compilation. Another example is that our dot product is placed on the @Replicated(player0, player1, player2) placement. Replicated placements are virtual placements, meaning their operations must always be compiled into a series of operations against concrete placements like HostPlacement before the computation can be evaluated against a Moose runtime.
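To build intuition for what lowering produces, recall the basic structure a replicated placement compiles down to: each value is secret-shared additively over a ring among the three hosts. A minimal sketch of that additive structure (simplified and illustrative only – real replicated sharing gives each player two of the three shares, and share/reconstruct are not PyMoose APIs):

```python
import random

MODULUS = 2 ** 128  # the ring Z_{2^128} behind the Ring128 tensors

def share(x):
    # split x into three additive shares that sum to x modulo 2^128
    s0 = random.randrange(MODULUS)
    s1 = random.randrange(MODULUS)
    s2 = (x - s0 - s1) % MODULUS
    return s0, s1, s2

def reconstruct(shares):
    # any single share reveals nothing; the sum recovers the value
    return sum(shares) % MODULUS
```

The Sub/SampleSeeded patterns in the lowered output below are this sharing step in disguise: a party samples random shares and subtracts them from its value to form the remaining share.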

Next, let’s run a lowering pass on our computation, to see what a compiled computation might look like:

!elk compile dotprod.moose --passes lowering
op_0 = Constant{value = HostFloat64Tensor([[1.0, 2.0, 3.0]])}: () -> HostFloat64Tensor () @Host(player0)
op_1 = Constant{value = HostFloat64Tensor([[4.0], [5.0], [6.0]])}: () -> HostFloat64Tensor () @Host(player1)
op_2 = RingFixedpointEncode{scaling_base = 2, scaling_exp = 40}: (HostFloat64Tensor) -> HostRing128Tensor (op_1) @Host(player1)
op_3 = RingFixedpointEncode{scaling_base = 2, scaling_exp = 40}: (HostFloat64Tensor) -> HostRing128Tensor (op_0) @Host(player0)
op_4 = PrfKeyGen: () -> HostPrfKey () @Host(player0)
op_5 = PrfKeyGen: () -> HostPrfKey () @Host(player1)
op_6 = PrfKeyGen: () -> HostPrfKey () @Host(player2)
op_7 = Shape: (HostRing128Tensor) -> HostShape (op_3) @Host(player0)
op_8 = DeriveSeed{sync_key = 1c91a80c975d3ba4df3b9f040575b11f}: (HostPrfKey) -> HostSeed (op_4) @Host(player0)
op_9 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_7, op_8) @Host(player0)
op_10 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_3, op_9) @Host(player0)
op_11 = DeriveSeed{sync_key = 1c91a80c975d3ba4df3b9f040575b11f}: (HostPrfKey) -> HostSeed (op_4) @Host(player2)
op_12 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (op_7) @Host(player2)
op_13 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_7, op_11) @Host(player2)
op_14 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (op_7) @Host(player1)
op_15 = Shape: (HostRing128Tensor) -> HostShape (op_2) @Host(player1)
op_16 = DeriveSeed{sync_key = a9e2b07e381d47987c6fde7005af7a35}: (HostPrfKey) -> HostSeed (op_5) @Host(player1)
op_17 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_15, op_16) @Host(player1)
op_18 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_2, op_17) @Host(player1)
op_19 = DeriveSeed{sync_key = a9e2b07e381d47987c6fde7005af7a35}: (HostPrfKey) -> HostSeed (op_5) @Host(player0)
op_20 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (op_15) @Host(player0)
op_21 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_15, op_19) @Host(player0)
op_22 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (op_15) @Host(player2)
op_23 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_9, op_20) @Host(player0)
op_24 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_9, op_21) @Host(player0)
op_25 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_23, op_24) @Host(player0)
op_26 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_10, op_20) @Host(player0)
op_27 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_25, op_26) @Host(player0)
op_28 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_10, op_17) @Host(player1)
op_29 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_10, op_18) @Host(player1)
op_30 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_28, op_29) @Host(player1)
op_31 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_14, op_17) @Host(player1)
op_32 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_30, op_31) @Host(player1)
op_33 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_12, op_18) @Host(player2)
op_34 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_12, op_22) @Host(player2)
op_35 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_33, op_34) @Host(player2)
op_36 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_13, op_18) @Host(player2)
op_37 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_35, op_36) @Host(player2)
op_38 = Shape: (HostRing128Tensor) -> HostShape (op_27) @Host(player0)
op_39 = Shape: (HostRing128Tensor) -> HostShape (op_32) @Host(player1)
op_40 = Shape: (HostRing128Tensor) -> HostShape (op_37) @Host(player2)
op_41 = DeriveSeed{sync_key = 08b0b9540043dd1894648fd606bf680c}: (HostPrfKey) -> HostSeed (op_4) @Host(player0)
op_42 = DeriveSeed{sync_key = b2d302a76625d5cf5796ee5aac276b37}: (HostPrfKey) -> HostSeed (op_5) @Host(player0)
op_43 = DeriveSeed{sync_key = b2d302a76625d5cf5796ee5aac276b37}: (HostPrfKey) -> HostSeed (op_5) @Host(player1)
op_44 = DeriveSeed{sync_key = 3a099c3ac008d8416dc793242c5e6868}: (HostPrfKey) -> HostSeed (op_6) @Host(player1)
op_45 = DeriveSeed{sync_key = 3a099c3ac008d8416dc793242c5e6868}: (HostPrfKey) -> HostSeed (op_6) @Host(player2)
op_46 = DeriveSeed{sync_key = 08b0b9540043dd1894648fd606bf680c}: (HostPrfKey) -> HostSeed (op_4) @Host(player2)
op_47 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_38, op_41) @Host(player0)
op_48 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_38, op_42) @Host(player0)
op_49 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_47, op_48) @Host(player0)
op_50 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_39, op_43) @Host(player1)
op_51 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_39, op_44) @Host(player1)
op_52 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_50, op_51) @Host(player1)
op_53 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_40, op_45) @Host(player2)
op_54 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_40, op_46) @Host(player2)
op_55 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_53, op_54) @Host(player2)
op_56 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_27, op_49) @Host(player0)
op_57 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_32, op_52) @Host(player1)
op_58 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_37, op_55) @Host(player2)
op_59 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_56, op_57) @Host(player0)
op_60 = Shape: (HostRing128Tensor) -> HostShape (op_59) @Host(player0)
op_61 = Sample{}: (HostShape) -> HostRing128Tensor (op_60) @Host(player2)
op_62 = Shr{amount = 127}: (HostRing128Tensor) -> HostRing128Tensor (op_61) @Host(player2)
op_63 = Shl{amount = 1}: (HostRing128Tensor) -> HostRing128Tensor (op_61) @Host(player2)
op_64 = Shr{amount = 41}: (HostRing128Tensor) -> HostRing128Tensor (op_63) @Host(player2)
op_65 = PrfKeyGen: () -> HostPrfKey () @Host(player2)
op_66 = DeriveSeed{sync_key = 230fddec519fda8e1696d16deb283833}: (HostPrfKey) -> HostSeed (op_65) @Host(player2)
op_67 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_60, op_66) @Host(player2)
op_68 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_61, op_67) @Host(player2)
op_69 = DeriveSeed{sync_key = 4bc836f2610307e801619deb4bb02761}: (HostPrfKey) -> HostSeed (op_65) @Host(player2)
op_70 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_60, op_69) @Host(player2)
op_71 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_64, op_70) @Host(player2)
op_72 = DeriveSeed{sync_key = bad95aac5edf6f9eba56546ef215c7b2}: (HostPrfKey) -> HostSeed (op_65) @Host(player2)
op_73 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_60, op_72) @Host(player2)
op_74 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_62, op_73) @Host(player2)
op_75 = Fill{value = Ring128(1)}: (HostShape) -> HostRing128Tensor (op_60) @Host(player0)
op_76 = Shl{amount = 126}: (HostRing128Tensor) -> HostRing128Tensor (op_75) @Host(player0)
op_77 = Shl{amount = 86}: (HostRing128Tensor) -> HostRing128Tensor (op_75) @Host(player0)
op_78 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_59, op_76) @Host(player0)
op_79 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_78, op_67) @Host(player0)
op_80 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_58, op_68) @Host(player1)
op_81 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_79, op_80) @Host(player0)
op_82 = Shl{amount = 1}: (HostRing128Tensor) -> HostRing128Tensor (op_81) @Host(player0)
op_83 = Shr{amount = 41}: (HostRing128Tensor) -> HostRing128Tensor (op_82) @Host(player0)
op_84 = Shr{amount = 127}: (HostRing128Tensor) -> HostRing128Tensor (op_81) @Host(player0)
op_85 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_73, op_84) @Host(player0)
op_86 = Mul: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_73, op_84) @Host(player0)
op_87 = Mul: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_74, op_84) @Host(player1)
op_88 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_85, op_86) @Host(player0)
op_89 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_74, op_87) @Host(player1)
op_90 = Mul: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_73, op_84) @Host(player0)
op_91 = Mul: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_74, op_84) @Host(player1)
op_92 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_88, op_90) @Host(player0)
op_93 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_89, op_91) @Host(player1)
op_94 = Shl{amount = 87}: (HostRing128Tensor) -> HostRing128Tensor (op_92) @Host(player0)
op_95 = Shl{amount = 87}: (HostRing128Tensor) -> HostRing128Tensor (op_93) @Host(player1)
op_96 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_83, op_70) @Host(player0)
op_97 = Neg: (HostRing128Tensor) -> HostRing128Tensor (op_71) @Host(player1)
op_98 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_96, op_94) @Host(player0)
op_99 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_97, op_95) @Host(player1)
op_100 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_98, op_77) @Host(player0)
op_101 = PrfKeyGen: () -> HostPrfKey () @Host(player2)
op_102 = DeriveSeed{sync_key = a9bb62cb38ce0b1a41054b82e9d93306}: (HostPrfKey) -> HostSeed (op_101) @Host(player2)
op_103 = DeriveSeed{sync_key = c4a3c5edf66ce8d480e301c40749ec61}: (HostPrfKey) -> HostSeed (op_101) @Host(player2)
op_104 = Shape: (HostRing128Tensor) -> HostShape (op_100) @Host(player0)
op_105 = Shape: (HostRing128Tensor) -> HostShape (op_99) @Host(player1)
op_106 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_104, op_102) @Host(player0)
op_107 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_105, op_103) @Host(player1)
op_108 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_104, op_102) @Host(player2)
op_109 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_104, op_103) @Host(player2)
op_110 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_100, op_106) @Host(player0)
op_111 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_99, op_107) @Host(player1)
op_112 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_110, op_111) @Host(player0)
op_113 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_108, op_112) @Host(player2)
op_114 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_113, op_109) @Host(player2)
op_115 = RingFixedpointDecode{scaling_base = 2, scaling_exp = 40}: (HostRing128Tensor) -> HostFloat64Tensor (op_114) @Host(player2)
op_116 = Output{tag = "output_0"}: (HostFloat64Tensor) -> HostFloat64Tensor (op_115) @Host(player2)

Using Elk to get an evaluation-ready computation#

Examining this compiled computation, we can see that all types are now concrete: they all carry the Host prefix, denoting values owned by particular host devices. All operations are also pinned to HostPlacements; there are no more virtual placements.
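The RingFixedpointEncode and RingFixedpointDecode ops in the output correspond to the Fixed128(24, 40) dtype from earlier: floats are scaled by 2^40 and embedded in the ring Z_{2^128}. A rough sketch of that encoding (illustrative only, not the PyMoose API; negatives follow the two’s-complement convention):

```python
MODULUS = 2 ** 128   # ring size for Ring128 tensors
SCALING_EXP = 40     # fractional bits, matching scaling_exp = 40 above

def encode(x):
    # scale by 2^40, round, and reduce into the ring
    return round(x * 2 ** SCALING_EXP) % MODULUS

def decode(r):
    # values in the upper half of the ring represent negatives
    if r >= MODULUS // 2:
        r -= MODULUS
    return r / 2 ** SCALING_EXP
```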

However, this computation is not yet ready for evaluation, as we can see by examining the following two lines of the compiled output:

op_7 = Shape: (HostRing128Tensor) -> HostShape (op_3) @Host(player0)
...
op_12 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (op_7) @Host(player2)

Taking this apart step by step: op_7 computes the shape of a tensor located on player0, and op_12 then uses that shape to fill a tensor on player2.

Since this is the first time the output of op_7 is used outside of player0, we should ask ourselves: how will this shape data make its way from player0 to player2 at runtime? The answer is that we have to run another compiler pass to insert the networking ops that will be required at runtime, which we can do now:
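The networking pass pairs each Send with its matching Receive through a shared rendezvous_key, so the pairing depends on the key rather than on execution order. Conceptually, the runtime’s networking layer behaves like a keyed mailbox; a toy, single-process sketch of that pattern (not Moose’s actual implementation, where parties run concurrently and a Receive blocks until the value arrives):

```python
from collections import defaultdict
from queue import Queue

class Mailbox:
    """Toy model of rendezvous-based networking between parties."""

    def __init__(self):
        self._slots = defaultdict(Queue)

    def send(self, rendezvous_key, value):
        # the sender deposits the value under the agreed key
        self._slots[rendezvous_key].put(value)

    def receive(self, rendezvous_key):
        # the receiver picks it up by the same key
        return self._slots[rendezvous_key].get()
```

Because the key, not the order of operations, pairs the two sides, the compiler only has to assign matching rendezvous keys to each Send/Receive pair it inserts, as seen in the output below.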

!elk compile dotprod.moose --passes lowering,networking
op_0 = Constant{value = HostFloat64Tensor([[1.0, 2.0, 3.0]])}: () -> HostFloat64Tensor () @Host(player0)
op_1 = Constant{value = HostFloat64Tensor([[4.0], [5.0], [6.0]])}: () -> HostFloat64Tensor () @Host(player1)
op_2 = RingFixedpointEncode{scaling_base = 2, scaling_exp = 40}: (HostFloat64Tensor) -> HostRing128Tensor (op_1) @Host(player1)
op_3 = RingFixedpointEncode{scaling_base = 2, scaling_exp = 40}: (HostFloat64Tensor) -> HostRing128Tensor (op_0) @Host(player0)
op_4 = PrfKeyGen: () -> HostPrfKey () @Host(player0)
op_5 = PrfKeyGen: () -> HostPrfKey () @Host(player1)
op_6 = PrfKeyGen: () -> HostPrfKey () @Host(player2)
op_7 = Shape: (HostRing128Tensor) -> HostShape (op_3) @Host(player0)
op_8 = DeriveSeed{sync_key = 56fa034cef0c2dfa42245e65af8d8292}: (HostPrfKey) -> HostSeed (op_4) @Host(player0)
op_9 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_7, op_8) @Host(player0)
op_10 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_3, op_9) @Host(player0)
op_11 = DeriveSeed{sync_key = 56fa034cef0c2dfa42245e65af8d8292}: (HostPrfKey) -> HostSeed (receive_0) @Host(player2)
op_12 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (receive_4) @Host(player2)
op_13 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (receive_4, op_11) @Host(player2)
op_14 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (receive_3) @Host(player1)
op_15 = Shape: (HostRing128Tensor) -> HostShape (op_2) @Host(player1)
op_16 = DeriveSeed{sync_key = c03d3f64819089474d759c1f6b0cefb7}: (HostPrfKey) -> HostSeed (op_5) @Host(player1)
op_17 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_15, op_16) @Host(player1)
op_18 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_2, op_17) @Host(player1)
op_19 = DeriveSeed{sync_key = c03d3f64819089474d759c1f6b0cefb7}: (HostPrfKey) -> HostSeed (receive_1) @Host(player0)
op_20 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (receive_7) @Host(player0)
op_21 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (receive_7, op_19) @Host(player0)
op_22 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (receive_6) @Host(player2)
op_23 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_9, op_20) @Host(player0)
op_24 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_9, op_21) @Host(player0)
op_25 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_23, op_24) @Host(player0)
op_26 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_10, op_20) @Host(player0)
op_27 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_25, op_26) @Host(player0)
op_28 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (receive_5, op_17) @Host(player1)
op_29 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (receive_5, op_18) @Host(player1)
op_30 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_28, op_29) @Host(player1)
op_31 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_14, op_17) @Host(player1)
op_32 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_30, op_31) @Host(player1)
op_33 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_12, receive_8) @Host(player2)
op_34 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_12, op_22) @Host(player2)
op_35 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_33, op_34) @Host(player2)
op_36 = Dot: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_13, receive_8) @Host(player2)
op_37 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_35, op_36) @Host(player2)
op_38 = Shape: (HostRing128Tensor) -> HostShape (op_27) @Host(player0)
op_39 = Shape: (HostRing128Tensor) -> HostShape (op_32) @Host(player1)
op_40 = Shape: (HostRing128Tensor) -> HostShape (op_37) @Host(player2)
op_41 = DeriveSeed{sync_key = 179d427663dc2fc82c3327213fa6bfaf}: (HostPrfKey) -> HostSeed (op_4) @Host(player0)
op_42 = DeriveSeed{sync_key = 5d1af3ee2bd5fc12d76b83f363d46463}: (HostPrfKey) -> HostSeed (receive_1) @Host(player0)
op_43 = DeriveSeed{sync_key = 5d1af3ee2bd5fc12d76b83f363d46463}: (HostPrfKey) -> HostSeed (op_5) @Host(player1)
op_44 = DeriveSeed{sync_key = a8dc975663e53ed620bda8edacfbfec1}: (HostPrfKey) -> HostSeed (receive_2) @Host(player1)
op_45 = DeriveSeed{sync_key = a8dc975663e53ed620bda8edacfbfec1}: (HostPrfKey) -> HostSeed (op_6) @Host(player2)
op_46 = DeriveSeed{sync_key = 179d427663dc2fc82c3327213fa6bfaf}: (HostPrfKey) -> HostSeed (receive_0) @Host(player2)
op_47 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_38, op_41) @Host(player0)
op_48 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_38, op_42) @Host(player0)
op_49 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_47, op_48) @Host(player0)
op_50 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_39, op_43) @Host(player1)
op_51 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_39, op_44) @Host(player1)
op_52 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_50, op_51) @Host(player1)
op_53 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_40, op_45) @Host(player2)
op_54 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_40, op_46) @Host(player2)
op_55 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_53, op_54) @Host(player2)
op_56 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_27, op_49) @Host(player0)
op_57 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_32, op_52) @Host(player1)
op_58 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_37, op_55) @Host(player2)
op_59 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_56, receive_9) @Host(player0)
op_60 = Shape: (HostRing128Tensor) -> HostShape (op_59) @Host(player0)
op_61 = Sample{}: (HostShape) -> HostRing128Tensor (receive_11) @Host(player2)
op_62 = Shr{amount = 127}: (HostRing128Tensor) -> HostRing128Tensor (op_61) @Host(player2)
op_63 = Shl{amount = 1}: (HostRing128Tensor) -> HostRing128Tensor (op_61) @Host(player2)
op_64 = Shr{amount = 41}: (HostRing128Tensor) -> HostRing128Tensor (op_63) @Host(player2)
op_65 = PrfKeyGen: () -> HostPrfKey () @Host(player2)
op_66 = DeriveSeed{sync_key = b9f398a60efc76e01b47977fce06b767}: (HostPrfKey) -> HostSeed (op_65) @Host(player2)
op_67 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (receive_11, op_66) @Host(player2)
op_68 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_61, op_67) @Host(player2)
op_69 = DeriveSeed{sync_key = 0dbbc6e4905d4cc458452d93dbcda785}: (HostPrfKey) -> HostSeed (op_65) @Host(player2)
op_70 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (receive_11, op_69) @Host(player2)
op_71 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_64, op_70) @Host(player2)
op_72 = DeriveSeed{sync_key = 0dc0e20cef3f6937354f70f092701fb9}: (HostPrfKey) -> HostSeed (op_65) @Host(player2)
op_73 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (receive_11, op_72) @Host(player2)
op_74 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_62, op_73) @Host(player2)
op_75 = Fill{value = Ring128(1)}: (HostShape) -> HostRing128Tensor (op_60) @Host(player0)
op_76 = Shl{amount = 126}: (HostRing128Tensor) -> HostRing128Tensor (op_75) @Host(player0)
op_77 = Shl{amount = 86}: (HostRing128Tensor) -> HostRing128Tensor (op_75) @Host(player0)
op_78 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_59, op_76) @Host(player0)
op_79 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_78, receive_12) @Host(player0)
op_80 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (receive_10, receive_13) @Host(player1)
op_81 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_79, receive_18) @Host(player0)
op_82 = Shl{amount = 1}: (HostRing128Tensor) -> HostRing128Tensor (op_81) @Host(player0)
op_83 = Shr{amount = 41}: (HostRing128Tensor) -> HostRing128Tensor (op_82) @Host(player0)
op_84 = Shr{amount = 127}: (HostRing128Tensor) -> HostRing128Tensor (op_81) @Host(player0)
op_85 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (receive_16, op_84) @Host(player0)
op_86 = Mul: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (receive_16, op_84) @Host(player0)
op_87 = Mul: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (receive_17, receive_19) @Host(player1)
op_88 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_85, op_86) @Host(player0)
op_89 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (receive_17, op_87) @Host(player1)
op_90 = Mul: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (receive_16, op_84) @Host(player0)
op_91 = Mul: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (receive_17, receive_19) @Host(player1)
op_92 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_88, op_90) @Host(player0)
op_93 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_89, op_91) @Host(player1)
op_94 = Shl{amount = 87}: (HostRing128Tensor) -> HostRing128Tensor (op_92) @Host(player0)
op_95 = Shl{amount = 87}: (HostRing128Tensor) -> HostRing128Tensor (op_93) @Host(player1)
op_96 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_83, receive_14) @Host(player0)
op_97 = Neg: (HostRing128Tensor) -> HostRing128Tensor (receive_15) @Host(player1)
op_98 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_96, op_94) @Host(player0)
op_99 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_97, op_95) @Host(player1)
op_100 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_98, op_77) @Host(player0)
op_101 = PrfKeyGen: () -> HostPrfKey () @Host(player2)
op_102 = DeriveSeed{sync_key = 9c79a346a27cc330069620e3d8abfe4d}: (HostPrfKey) -> HostSeed (op_101) @Host(player2)
op_103 = DeriveSeed{sync_key = 80fdc865389219096a9a97c7b79d5719}: (HostPrfKey) -> HostSeed (op_101) @Host(player2)
op_104 = Shape: (HostRing128Tensor) -> HostShape (op_100) @Host(player0)
op_105 = Shape: (HostRing128Tensor) -> HostShape (op_99) @Host(player1)
op_106 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_104, receive_20) @Host(player0)
op_107 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (op_105, receive_21) @Host(player1)
op_108 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (receive_22, op_102) @Host(player2)
op_109 = SampleSeeded{}: (HostShape, HostSeed) -> HostRing128Tensor (receive_22, op_103) @Host(player2)
op_110 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_100, op_106) @Host(player0)
op_111 = Sub: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_99, op_107) @Host(player1)
op_112 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_110, receive_23) @Host(player0)
op_113 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_108, receive_24) @Host(player2)
op_114 = Add: (HostRing128Tensor, HostRing128Tensor) -> HostRing128Tensor (op_113, op_109) @Host(player2)
op_115 = RingFixedpointDecode{scaling_base = 2, scaling_exp = 40}: (HostRing128Tensor) -> HostFloat64Tensor (op_114) @Host(player2)
op_116 = Output{tag = "output_0"}: (HostFloat64Tensor) -> HostFloat64Tensor (op_115) @Host(player2)
send_0 = Send{rendezvous_key = 00000000000000000000000000000000, receiver = "player2"}: (HostPrfKey) -> HostUnit (op_4) @Host(player0)
receive_0 = Receive{rendezvous_key = 00000000000000000000000000000000, sender = "player0"}: () -> HostPrfKey () @Host(player2)
send_1 = Send{rendezvous_key = 01000000000000000000000000000000, receiver = "player0"}: (HostPrfKey) -> HostUnit (op_5) @Host(player1)
receive_1 = Receive{rendezvous_key = 01000000000000000000000000000000, sender = "player1"}: () -> HostPrfKey () @Host(player0)
send_2 = Send{rendezvous_key = 02000000000000000000000000000000, receiver = "player1"}: (HostPrfKey) -> HostUnit (op_6) @Host(player2)
receive_2 = Receive{rendezvous_key = 02000000000000000000000000000000, sender = "player2"}: () -> HostPrfKey () @Host(player1)
send_3 = Send{rendezvous_key = 03000000000000000000000000000000, receiver = "player1"}: (HostShape) -> HostUnit (op_7) @Host(player0)
receive_3 = Receive{rendezvous_key = 03000000000000000000000000000000, sender = "player0"}: () -> HostShape () @Host(player1)
send_4 = Send{rendezvous_key = 04000000000000000000000000000000, receiver = "player2"}: (HostShape) -> HostUnit (op_7) @Host(player0)
receive_4 = Receive{rendezvous_key = 04000000000000000000000000000000, sender = "player0"}: () -> HostShape () @Host(player2)
send_5 = Send{rendezvous_key = 05000000000000000000000000000000, receiver = "player1"}: (HostRing128Tensor) -> HostUnit (op_10) @Host(player0)
receive_5 = Receive{rendezvous_key = 05000000000000000000000000000000, sender = "player0"}: () -> HostRing128Tensor () @Host(player1)
send_6 = Send{rendezvous_key = 06000000000000000000000000000000, receiver = "player2"}: (HostShape) -> HostUnit (op_15) @Host(player1)
receive_6 = Receive{rendezvous_key = 06000000000000000000000000000000, sender = "player1"}: () -> HostShape () @Host(player2)
send_7 = Send{rendezvous_key = 07000000000000000000000000000000, receiver = "player0"}: (HostShape) -> HostUnit (op_15) @Host(player1)
receive_7 = Receive{rendezvous_key = 07000000000000000000000000000000, sender = "player1"}: () -> HostShape () @Host(player0)
send_8 = Send{rendezvous_key = 08000000000000000000000000000000, receiver = "player2"}: (HostRing128Tensor) -> HostUnit (op_18) @Host(player1)
receive_8 = Receive{rendezvous_key = 08000000000000000000000000000000, sender = "player1"}: () -> HostRing128Tensor () @Host(player2)
send_9 = Send{rendezvous_key = 09000000000000000000000000000000, receiver = "player0"}: (HostRing128Tensor) -> HostUnit (op_57) @Host(player1)
receive_9 = Receive{rendezvous_key = 09000000000000000000000000000000, sender = "player1"}: () -> HostRing128Tensor () @Host(player0)
send_10 = Send{rendezvous_key = 0a000000000000000000000000000000, receiver = "player1"}: (HostRing128Tensor) -> HostUnit (op_58) @Host(player2)
receive_10 = Receive{rendezvous_key = 0a000000000000000000000000000000, sender = "player2"}: () -> HostRing128Tensor () @Host(player1)
send_11 = Send{rendezvous_key = 0b000000000000000000000000000000, receiver = "player2"}: (HostShape) -> HostUnit (op_60) @Host(player0)
receive_11 = Receive{rendezvous_key = 0b000000000000000000000000000000, sender = "player0"}: () -> HostShape () @Host(player2)
send_12 = Send{rendezvous_key = 0c000000000000000000000000000000, receiver = "player0"}: (HostRing128Tensor) -> HostUnit (op_67) @Host(player2)
receive_12 = Receive{rendezvous_key = 0c000000000000000000000000000000, sender = "player2"}: () -> HostRing128Tensor () @Host(player0)
send_13 = Send{rendezvous_key = 0d000000000000000000000000000000, receiver = "player1"}: (HostRing128Tensor) -> HostUnit (op_68) @Host(player2)
receive_13 = Receive{rendezvous_key = 0d000000000000000000000000000000, sender = "player2"}: () -> HostRing128Tensor () @Host(player1)
send_14 = Send{rendezvous_key = 0e000000000000000000000000000000, receiver = "player0"}: (HostRing128Tensor) -> HostUnit (op_70) @Host(player2)
receive_14 = Receive{rendezvous_key = 0e000000000000000000000000000000, sender = "player2"}: () -> HostRing128Tensor () @Host(player0)
send_15 = Send{rendezvous_key = 0f000000000000000000000000000000, receiver = "player1"}: (HostRing128Tensor) -> HostUnit (op_71) @Host(player2)
receive_15 = Receive{rendezvous_key = 0f000000000000000000000000000000, sender = "player2"}: () -> HostRing128Tensor () @Host(player1)
send_16 = Send{rendezvous_key = 10000000000000000000000000000000, receiver = "player0"}: (HostRing128Tensor) -> HostUnit (op_73) @Host(player2)
receive_16 = Receive{rendezvous_key = 10000000000000000000000000000000, sender = "player2"}: () -> HostRing128Tensor () @Host(player0)
send_17 = Send{rendezvous_key = 11000000000000000000000000000000, receiver = "player1"}: (HostRing128Tensor) -> HostUnit (op_74) @Host(player2)
receive_17 = Receive{rendezvous_key = 11000000000000000000000000000000, sender = "player2"}: () -> HostRing128Tensor () @Host(player1)
send_18 = Send{rendezvous_key = 12000000000000000000000000000000, receiver = "player0"}: (HostRing128Tensor) -> HostUnit (op_80) @Host(player1)
receive_18 = Receive{rendezvous_key = 12000000000000000000000000000000, sender = "player1"}: () -> HostRing128Tensor () @Host(player0)
send_19 = Send{rendezvous_key = 13000000000000000000000000000000, receiver = "player1"}: (HostRing128Tensor) -> HostUnit (op_84) @Host(player0)
receive_19 = Receive{rendezvous_key = 13000000000000000000000000000000, sender = "player0"}: () -> HostRing128Tensor () @Host(player1)
send_20 = Send{rendezvous_key = 14000000000000000000000000000000, receiver = "player0"}: (HostSeed) -> HostUnit (op_102) @Host(player2)
receive_20 = Receive{rendezvous_key = 14000000000000000000000000000000, sender = "player2"}: () -> HostSeed () @Host(player0)
send_21 = Send{rendezvous_key = 15000000000000000000000000000000, receiver = "player1"}: (HostSeed) -> HostUnit (op_103) @Host(player2)
receive_21 = Receive{rendezvous_key = 15000000000000000000000000000000, sender = "player2"}: () -> HostSeed () @Host(player1)
send_22 = Send{rendezvous_key = 16000000000000000000000000000000, receiver = "player2"}: (HostShape) -> HostUnit (op_104) @Host(player0)
receive_22 = Receive{rendezvous_key = 16000000000000000000000000000000, sender = "player0"}: () -> HostShape () @Host(player2)
send_23 = Send{rendezvous_key = 17000000000000000000000000000000, receiver = "player0"}: (HostRing128Tensor) -> HostUnit (op_111) @Host(player1)
receive_23 = Receive{rendezvous_key = 17000000000000000000000000000000, sender = "player1"}: () -> HostRing128Tensor () @Host(player0)
send_24 = Send{rendezvous_key = 18000000000000000000000000000000, receiver = "player2"}: (HostRing128Tensor) -> HostUnit (op_112) @Host(player0)
receive_24 = Receive{rendezvous_key = 18000000000000000000000000000000, sender = "player0"}: () -> HostRing128Tensor () @Host(player2)

Examining the output, we can see that operations op_7 and op_12 have been updated so that the shape data can be sent from player0 to player2:

op_7 = Shape: (HostRing128Tensor) -> HostShape (op_1) @Host(player0)
...
op_12 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (receive_4) @Host(player2)
...
send_4 = Send{rendezvous_key = 04000000000000000000000000000000, receiver = "player2"}: (HostShape) -> HostUnit (op_7) @Host(player0)
receive_4 = Receive{rendezvous_key = 04000000000000000000000000000000, sender = "player0"}: () -> HostShape () @Host(player2)

It’s easier to see what’s happening by reordering these ops to respect input-output relations:

op_7 = Shape: (HostRing128Tensor) -> HostShape (op_1) @Host(player0)
send_4 = Send{rendezvous_key = 04000000000000000000000000000000, receiver = "player2"}: (HostShape) -> HostUnit (op_7) @Host(player0)
receive_4 = Receive{rendezvous_key = 04000000000000000000000000000000, sender = "player0"}: () -> HostShape () @Host(player2)
op_12 = Fill{value = Ring128(0)}: (HostShape) -> HostRing128Tensor (receive_4) @Host(player2)

Adding the networking pass to compilation modified op_12 to take its input from a receive_4 operation @ Host(player2). That operation shares a rendezvous key with send_4 @ Host(player0), which takes op_7 as its input. The computation now instructs that the output of op_7 be sent from player0 to player2 and used as the input to op_12.
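
To build intuition for the rendezvous mechanism, here is a toy in-memory sketch in Python. It is purely illustrative and is not how Moose's networking is actually implemented:

```python
# Toy illustration of rendezvous-keyed networking (not Moose's real implementation).
# A send and a receive that share the same rendezvous key are matched up.

class ToyNetworking:
    def __init__(self):
        self._mailbox = {}

    def send(self, rendezvous_key, receiver, value):
        # In Moose, Send is placed on the sending host (e.g. @Host(player0)).
        self._mailbox[(rendezvous_key, receiver)] = value

    def receive(self, rendezvous_key, receiver):
        # In Moose, Receive is placed on the receiving host (e.g. @Host(player2)).
        return self._mailbox[(rendezvous_key, receiver)]

net = ToyNetworking()

# player0 runs send_4: ship op_7's output (a shape) to player2.
net.send("04000000000000000000000000000000", "player2", (1, 3))

# player2 runs receive_4, whose output feeds op_12 (the Fill operation).
shape = net.receive("04000000000000000000000000000000", "player2")
```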

Common compilation passes#

The following is a sane default list of compiler passes to compile arbitrary PyMoose computations into their runtime-ready versions:

!elk compile dotprod.moose -p typing,lowering,prune,networking,toposort -o dotprod-networked.moose
  • The typing pass looks for Values of UnknownType in a computation and attempts to fill in type information using a simple one-hop inference rule (e.g. use the next operation’s signature to fill in the output type). It’s only necessary if you were lazy with how you specified output vtypes of a few kinds of operations in your PyMoose computation.

  • The prune pass looks for all subgraphs that aren’t connected to an Output operation, and removes those subgraphs from the graph.

  • The toposort pass reorders the resulting textual form of the computation so that the directed graph it represents is in topological order, i.e. any operation that takes a set of inputs comes after those inputs. Note this means that you cannot generally treat the textual form of a computation as unique.
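
To make the toposort pass concrete, here is a minimal topological sort (Kahn's algorithm) over a toy operation graph mirroring the reordered snippet from earlier; the real pass of course operates on Moose's internal representation:

```python
from collections import deque

# Toy operation graph: each op maps to the ops it takes as inputs.
# Mirrors the reordered snippet above: op_12 depends on receive_4,
# receive_4 logically follows send_4, and send_4 takes op_7.
deps = {
    "op_7": [],
    "send_4": ["op_7"],
    "receive_4": ["send_4"],
    "op_12": ["receive_4"],
}

def toposort(deps):
    # Kahn's algorithm: repeatedly emit ops whose inputs have all been emitted.
    indegree = {op: len(inputs) for op, inputs in deps.items()}
    consumers = {op: [] for op in deps}
    for op, inputs in deps.items():
        for inp in inputs:
            consumers[inp].append(op)
    ready = deque(op for op, d in indegree.items() if d == 0)
    order = []
    while ready:
        op = ready.popleft()
        order.append(op)
        for c in consumers[op]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return order

print(toposort(deps))  # every op appears after all of its inputs
```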

Other compilation passes#

WellFormed#

If you only want to check a textual computation for correctness without actually returning its compiled form, the wellformed pass can be faster than performing the full lowering.

Print#

The print pass allows you to convert your .moose file into .dot files for GraphViz rendering:

!elk compile dotprod.moose -p print -o dotprod.moose
digraph {
    0 [ label = "constant_0 = Constant\l@Host(player0)" shape = rectangle color = "#336699"]
    1 [ label = "constant_1 = Constant\l@Host(player1)" shape = rectangle color = "#ff0000"]
    2 [ label = "cast_1 = Cast\l@Host(player1)" shape = rectangle color = "#ff0000"]
    3 [ label = "cast_0 = Cast\l@Host(player0)" shape = rectangle color = "#336699"]
    4 [ label = "dot_0 = Dot\l@Replicated(player0, player1, player2)" shape = rectangle color = "#ff6600"]
    5 [ label = "cast_2 = Cast\l@Host(player2)" shape = rectangle color = "#92cd00"]
    6 [ label = "output_0 = Output\l@Host(player2)" shape = house color = "#92cd00"]
    1 -> 2 [ ]
    0 -> 3 [ ]
    3 -> 4 [ ]
    2 -> 4 [ ]
    4 -> 5 [ ]
    5 -> 6 [ ]
}

Using GraphvizOnline, we can render this output as a PNG:

The pass can be mixed and matched with other passes like lowering, prune, and networking, although compiled/networked computation graphs are often too large to render usefully this way. Here is the compiled version of the computation above without any networking ops:

%%capture
!elk compile dotprod.moose -p lowering,prune,toposort,print -o dotprod-compiled.moose

Rendered with GraphvizOnline:

And finally, the same computation with networking ops added:

%%capture
!elk compile dotprod.moose -p lowering,prune,networking,toposort,print -o dotprod-networked.moose

Rendered with GraphvizOnline:

Evaluating the textual form against a Moose runtime#

We can use the dasher binary from the Moose command line tools to locally evaluate the networked computation from above in a simulated setting:

!dasher dotprod-networked.moose
Roles found: ["player0", "player2", "player1"]
Output 'op_116' ready:
Ok(HostFloat64Tensor(HostTensor([[32.0]], shape=[1, 1], strides=[1, 1], layout=CFcf (0xf), dynamic ndim=2, HostPlacement { owner: Role("player2") })))
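
The value 32.0 is what we'd expect: the original computation is a dot product between [1, 2, 3] and [4, 5, 6]. We can sanity-check it with plain NumPy:

```python
import numpy as np

x = np.array([1., 2., 3.]).reshape((1, 3))  # player0's constant
w = np.array([4., 5., 6.]).reshape((3, 1))  # player1's constant
y = x @ w                                   # the replicated dot product
print(y)  # [[32.]]
```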

Although we often refer to “the Moose runtime”, Moose is actually a framework that can be used to build your own secure computation runtime. The three components of a runtime implementation are storage, networking, and choreography.

  • Storage is the mechanism by which Save and Load operations are implemented.

  • Networking is the mechanism by which Send and Receive operations are implemented. This determines how values get communicated from one host to another at runtime.

  • Choreography is the mechanism by which computations are actually launched and executed against a set of Moose workers/executors. This covers several important aspects of execution, including:

    • How each worker gets a copy of the computation

    • How each worker provides input values as arguments to the computation

    • How each worker reports (or doesn’t report) its computation outputs
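
To make this division of responsibilities concrete, here is a hypothetical sketch of the three components as plain Python classes. The names and signatures are ours for illustration and do not correspond to Moose's actual interfaces:

```python
# Hypothetical sketch of the three runtime components (illustrative only;
# Moose defines its own interfaces for these).

class InMemoryStorage:
    """Backs Save and Load operations."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

class InMemoryNetworking:
    """Backs Send and Receive operations; values rendezvous on a shared key."""
    def __init__(self):
        self._inbox = {}
    def send(self, key, receiver, value):
        self._inbox[(key, receiver)] = value
    def receive(self, key, receiver):
        return self._inbox.pop((key, receiver))

class LocalChoreography:
    """Gives every worker a copy of the computation, feeds in the arguments,
    and collects whatever outputs each worker reports."""
    def launch(self, computation, workers, arguments):
        return {name: run(computation, arguments) for name, run in workers.items()}

# Tiny demo: two "workers" that simply evaluate the computation locally.
choreo = LocalChoreography()
outputs = choreo.launch(
    computation=lambda args: args["x"] * 2,
    workers={"player0": lambda comp, args: comp(args),
             "player1": lambda comp, args: comp(args)},
    arguments={"x": 21},
)
print(outputs)  # {'player0': 42, 'player1': 42}
```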

The Moose command line tools ship with several canonical implementations of these packaged into binaries that we call “reindeer”.

  1. dasher is a reindeer used for simulation, somewhat similar to PyMoose’s LocalMooseRuntime.

  2. rudolph is a reindeer that uses filesystem choreography, in-memory storage, and gRPC networking.

  3. comet is similar to rudolph but uses a gRPC client cometctl for choreography. This is similar to PyMoose’s GrpcMooseRuntime.

  4. vixen is a naive, single-worker implementation that doesn’t implement any choreography at all. It should not be used, and is only there for legacy/educational reasons.