You are viewing a single comment's thread from:

RE: LeoThread 2024-11-22 09:45

elijahh (13) in LeoFinance • 11 months ago

This efficiency is evident in benchmarks. With a computational budget of 30B tokens, TokenFormer achieved a perplexity of 11.77, compared to 13.34 for Transformers trained from scratch. Lower perplexity means better language modeling.
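
For reference, perplexity is just the exponential of the average per-token cross-entropy loss, so the gap above works out to roughly 0.12 nats per token. A minimal sketch of that relationship (the helper function is illustrative, not from the TokenFormer paper; only the two quoted perplexities come from the post above):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood, in nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Working backwards from the reported numbers:
# a perplexity of 11.77 corresponds to an average loss of ln(11.77) nats/token,
# versus ln(13.34) for the Transformer baseline.
print(math.log(11.77))  # ~2.466 nats/token (TokenFormer)
print(math.log(13.34))  # ~2.591 nats/token (Transformer from scratch)
```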
