Let me line up the crucial differences between the two:
Hadoop MapReduce jobs are written in the Java programming language, while MongoDB's mapReduce functions are written in JavaScript.
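To make the Java side concrete, here is a minimal sketch of a word-count Mapper and Reducer against Hadoop's org.apache.hadoop.mapreduce API. The WordCount class name and the word-count task itself are just illustrative choices, and the job driver (Job setup and submission) is omitted:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Emits (word, 1) for every token in each input line.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sums the counts emitted for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}

Every Hadoop job follows this pattern: strongly typed Writable keys and values in a compiled Java class that gets submitted to the cluster.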
Hadoop MapReduce is designed to use all available cores across the nodes of the cluster, whereas MongoDB's mapReduce executes its JavaScript in a single thread.
Hadoop MapReduce is a separate processing framework that is not co-located with the data store, whereas MongoDB's mapReduce is built into the database itself and runs right alongside the stored data.
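For comparison, here is a sketch of the same word count using MongoDB's mapReduce, invoked from the synchronous Java driver so we stay in one language. The connection string, database and collection names, and the "text" field are placeholder assumptions; the point is that the map and reduce logic is plain JavaScript shipped as strings and executed inside mongod, next to the documents it reads (note that mapReduce is deprecated in recent MongoDB releases in favour of the aggregation pipeline):

import com.mongodb.client.MapReduceIterable;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class MongoWordCount {
    public static void main(String[] args) {
        // Connection string, database and collection names are placeholders.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("test");
            MongoCollection<Document> docs = db.getCollection("articles");

            // Map and reduce are JavaScript functions, sent to the server
            // and run inside mongod next to the stored data.
            String map = "function() {"
                       + "  this.text.split(' ').forEach(function(w) { emit(w, 1); });"
                       + "}";
            String reduce = "function(key, values) { return Array.sum(values); }";

            MapReduceIterable<Document> results = docs.mapReduce(map, reduce);
            for (Document d : results) {
                System.out.println(d.toJson());
            }
        }
    }
}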
Hadoop MapReduce has millions of engine-hours of production use behind it, so it covers numerous corner cases and handles far larger outputs than MongoDB's mapReduce.
Hadoop MapReduce also underpins higher-level frameworks such as Apache Pig, Apache Hive and many more.
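For instance, a Hive query submitted over JDBC is compiled into MapReduce jobs behind the scenes when Hive runs on the classic MapReduce execution engine. The sketch below assumes a HiveServer2 instance on localhost:10000, a table named words, and the Hive JDBC driver on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveWordTotals {
    public static void main(String[] args) throws SQLException {
        // HiveServer2 host/port and the table name are placeholders.
        String url = "jdbc:hive2://localhost:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "", "");
             Statement stmt = conn.createStatement();
             // Hive translates this SQL into MapReduce jobs on the cluster.
             ResultSet rs = stmt.executeQuery(
                 "SELECT word, COUNT(*) AS total FROM words GROUP BY word")) {
            while (rs.next()) {
                System.out.println(rs.getString("word") + " -> " + rs.getLong("total"));
            }
        }
    }
}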
At the end of the day, MongoDB's mapReduce is a good fit for shorter code and smaller data-processing jobs, but for complex problem solving and truly large datasets I would feel safer with Hadoop MapReduce.
The bottom line is that Java also needs to be taken into account, since it supports a wide range of libraries for statistical data analysis.
Hope this helps!
To know more about it, get your MongoDB certification today.
Thanks.