hadoop - "single point of failure" job tracker node goes down and Map jobs are either running or writing the output -
hadoop - "single point of failure" job tracker node goes down and Map jobs are either running or writing the output -
I'm new to Hadoop and want to know what happens when the "single point of failure" JobTracker node goes down while Map jobs are either running or writing their output. Does the JobTracker start the Map jobs over again once it comes back up?
The JobTracker is a single point of failure, meaning that if it goes down you won't be able to submit additional Map/Reduce jobs, and any existing jobs are killed.
When you restart the JobTracker, you need to resubmit the whole job again.
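Since nothing from the killed run survives, resubmitting just means running the normal job-submission code again from the client. Here is a minimal sketch using the classic (pre-YARN) org.apache.hadoop.mapreduce API; the job name, identity Mapper/Reducer, and command-line paths are placeholders, not from the original post:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ResubmitJob {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "resubmitted-job"); // job name is a placeholder
            job.setJarByClass(ResubmitJob.class);
            job.setMapperClass(Mapper.class);   // identity mapper, stands in for the real one
            job.setReducerClass(Reducer.class); // identity reducer, stands in for the real one
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);

            // Clear any partial output left behind by the run the JobTracker
            // crash killed; FileOutputFormat refuses to write into an
            // existing output directory.
            Path out = new Path(args[1]);
            out.getFileSystem(conf).delete(out, true);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, out);

            // Submits the whole job again to the restarted JobTracker and
            // blocks until it finishes; nothing from the killed run is reused.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }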
hadoop mapreduce