Optimize (parallelize) execution of my "virtual machine" model
Suppose I have a simple "virtual machine" model that keeps a queue of instructions, each in one of the following formats:

unary_opcode, in_address, out_address
binary_opcode, in1_address, in2_address, out_address

where the addresses are integer indices into an array of memory units.

Is there a well-known algorithm that analyzes the sequence of instructions and tries to parallelize them as much as possible without:

- producing a result different from sequential execution, or
- introducing a data race?
If instruction a comes before instruction b in the list, and ((b reads one of a's writes) or (a and b have the same write address) or (b writes one of a's reads)), add a directed edge from b to a. These three conditions are the classic read-after-write (RAW), write-after-write (WAW), and write-after-read (WAR) hazards. Note that the graph is a DAG, since the instructions are in a specific order.
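A minimal Python sketch of that edge construction, assuming each instruction is represented as a hypothetical `Instr` tuple with a tuple of input addresses and one output address (names are illustrative):

```python
from collections import namedtuple

# Hypothetical representation: `ins` is the tuple of input addresses
# (one for unary ops, two for binary ops), `out` the single output address.
Instr = namedtuple("Instr", ["opcode", "ins", "out"])

def build_dependency_edges(instrs):
    """Return edges (b, a): instruction index b depends on earlier index a."""
    edges = set()
    for i, a in enumerate(instrs):
        for j in range(i + 1, len(instrs)):
            b = instrs[j]
            raw = a.out in b.ins    # b reads one of a's writes
            waw = a.out == b.out    # a and b have the same write address
            war = b.out in a.ins    # b writes one of a's reads
            if raw or waw or war:
                edges.add((j, i))   # directed edge from b back to a
    return edges
```

This is the O(n²) pairwise check; for long instruction queues you would instead track the most recent writer and readers per address while scanning once.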
Now compute the "layer" of each instruction as follows: instructions with no outgoing edges are in layer 1; an instruction whose outgoing edges all point to layer n and below is in layer n+1. Obviously, if an instruction has an outgoing edge to an instruction whose layer isn't known yet, don't assign it a layer yet! There is a simple recursive routine for assigning layers that does a DFS from each instruction.
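That recursive routine can be sketched as a memoized DFS (taking the edge set in the from-b-to-a direction described above; the function name is illustrative):

```python
def assign_layers(n, edges):
    """Assign layers 1..k to instruction indices 0..n-1.

    `edges` contains pairs (b, a) meaning b has an outgoing edge to a.
    Layer is 1 for instructions with no outgoing edges, otherwise
    1 + the maximum layer among the instructions it points to.
    """
    deps = {i: set() for i in range(n)}
    for b, a in edges:
        deps[b].add(a)
    layer = {}
    def dfs(i):
        # Memoized DFS: each instruction's layer is computed exactly once.
        if i not in layer:
            layer[i] = 1 + max((dfs(d) for d in deps[i]), default=0)
        return layer[i]
    for i in range(n):
        dfs(i)
    return layer
```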
Now every instruction in layer n can run on cycle n (but no sooner), and this is optimal.
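Grouping instructions by layer then yields the parallel schedule directly; a small hypothetical helper:

```python
from collections import defaultdict

def schedule(layers):
    """Turn a {instruction index: layer} map into a list of parallel batches."""
    batches = defaultdict(list)
    for i in sorted(layers):
        batches[layers[i]].append(i)
    # Batch k (0-based) runs on cycle k + 1; instructions within
    # one batch have no dependency edges between them.
    return [batches[l] for l in sorted(batches)]
```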
it possible improve if parallel machine capable of quenching writes before instructions (the write-write conflict edges can removed in case). exactly mean is, when given batch of instructions executed in parallel, if of them have same write address, machine deterministically writes result of -last- instruction in batch write address (rather having unspecified behavior, or nondeterministically writing result of -some- instruction in batch wrote address).
Tags: algorithm, optimization, parallel-processing