British media: OpenAI has shortened the safety testing time for AI models.
Jin10 reported on April 11 that, according to the Financial Times, OpenAI has significantly cut the time and resources devoted to safety testing its powerful artificial intelligence models, raising concerns that its technology is being rushed out without sufficient safeguards. Whereas testers once had far longer, staff and third-party teams have recently been given only days to "evaluate" OpenAI's latest large language models.

According to eight people familiar with OpenAI's testing process, the startup's testing has become less thorough, with insufficient time and resources dedicated to identifying and mitigating risks, as the $300 billion company faces pressure to release new models quickly and maintain its competitive edge. Insiders said OpenAI has been pushing to release its new model, o3, as early as next week, leaving some testers less than a week for safety checks. Previously, OpenAI allowed months for safety testing.