Part 2/8:
One of the most striking features of M1 is its vast context window: up to 1 million input tokens, with responses of up to 80,000 generated tokens. To put this in perspective, the entire Harry Potter series consolidated into a single prompt would still keep M1 within its token limits. This lets the model hold a whole book series in working memory while producing coherent, contextually relevant output.
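To make these limits concrete, here is a minimal back-of-envelope sketch of checking whether a corpus fits in the window. It assumes the common rough heuristic of ~4 characters per token for English text; real tokenizers vary, so the numbers are order-of-magnitude only, and the constants simply restate the limits quoted above.

```python
# Rough token-budget check; ~4 chars/token is a heuristic, not a tokenizer.
CHARS_PER_TOKEN = 4
INPUT_LIMIT = 1_000_000   # M1's stated input-token limit
OUTPUT_LIMIT = 80_000     # M1's stated generation limit

def estimate_tokens(text: str) -> int:
    """Estimate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str) -> bool:
    """True if the estimated prompt size is within the input limit."""
    return estimate_tokens(text) <= INPUT_LIMIT

# ~1 MB of placeholder text, roughly novel-series scale.
corpus = "word " * 200_000
print(estimate_tokens(corpus), fits_in_context(corpus))
```

Under this heuristic, a megabyte of plain text comes out to roughly 250,000 tokens, well under the 1-million-token input limit.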