Fully Homomorphic Encryption (FHE): A New Fortress for AI Security
AI security issues are becoming increasingly prominent, and fully homomorphic encryption (FHE) is emerging as a leading solution.
Recently, an AI model named Manus achieved breakthrough results on the GAIA benchmark, outperforming large language models of comparable scale. Manus has demonstrated the ability to complete complex tasks independently, such as multinational business negotiations that involve contract analysis, strategic planning, and proposal drafting. Compared with traditional systems, Manus has clear advantages in dynamic goal decomposition, cross-modal reasoning, and memory-augmented learning: it can break a large task into hundreds of executable subtasks, handle multiple data types simultaneously, and use reinforcement learning to keep improving decision-making efficiency while reducing error rates.
The emergence of Manus has reignited industry debate over AI's development path: is the future headed toward Artificial General Intelligence (AGI) or Multi-Agent Systems (MAS)? This question reflects a core tension in AI development: how to balance efficiency and security. The closer a single agent gets to AGI, the less transparent its decision-making becomes and the greater the risk; multi-agent collaboration can spread that risk, but communication delays may cause it to miss critical decision windows.
As AI systems grow more capable, their potential security risks expand as well. In medical scenarios, for example, AI needs access to patients' sensitive genetic data; in financial negotiations, undisclosed corporate financial information may be involved. AI systems can also carry algorithmic bias, such as making unfair salary recommendations for specific groups during recruitment. More seriously, AI systems can become targets of attack, with adversarial signals implanted to skew their judgment.
To address these challenges, the industry has proposed a range of security strategies, among which fully homomorphic encryption (FHE) is considered a key technology for the AI era. FHE allows computation to be performed directly on encrypted data, meaning the system doing the computing never sees the plaintext and cannot decrypt the original data on its own.
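To make the "compute on ciphertexts" idea concrete, here is a minimal Python sketch of an additively homomorphic scheme (a toy Paillier implementation). This is not full FHE, which additionally supports multiplication on ciphertexts (as in BFV, CKKS, or TFHE); the parameters are deliberately tiny and insecure, and the function names are illustrative assumptions rather than any real library's API.

```python
# Toy Paillier cryptosystem: an *additively* homomorphic scheme, shown only to
# illustrate the core idea behind FHE -- computing on ciphertexts without
# ever decrypting them. Demo parameters are tiny and NOT secure.
import math
import secrets

def generate_keypair(p: int = 293, q: int = 433):
    """Generate a toy Paillier keypair from two small primes."""
    n = p * q
    n_sq = n * n
    g = n + 1                        # standard choice of generator
    lam = math.lcm(p - 1, q - 1)     # Carmichael's lambda for n = p*q
    mu = pow(lam, -1, n)             # modular inverse of lambda mod n
    return (n, g, n_sq), (lam, mu, n, n_sq)

def encrypt(pub, m: int) -> int:
    n, g, n_sq = pub
    while True:                      # random blinding factor r in Z_n*
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(priv, c: int) -> int:
    lam, mu, n, n_sq = priv
    l = (pow(c, lam, n_sq) - 1) // n          # L(x) = (x - 1) / n
    return (l * mu) % n

def add_encrypted(pub, c1: int, c2: int) -> int:
    """Homomorphic addition: multiplying ciphertexts adds the plaintexts."""
    return (c1 * c2) % pub[2]

if __name__ == "__main__":
    pub, priv = generate_keypair()
    salary_a, salary_b = 5200, 4800           # sensitive inputs stay encrypted
    c_total = add_encrypted(pub, encrypt(pub, salary_a), encrypt(pub, salary_b))
    # The server only ever touched ciphertexts; the key holder decrypts the sum.
    assert decrypt(priv, c_total) == salary_a + salary_b
    print("decrypted sum:", decrypt(priv, c_total))
```

The point of the sketch is the last few lines: the party holding only the public key can combine encrypted values and obtain a correct encrypted result, while only the private-key holder can read it.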
At the data level, FHE keeps all user inputs (including biometric features, voice, and so on) encrypted throughout processing, effectively preventing information leakage. At the algorithm level, FHE enables training on encrypted data, so even developers cannot peek at the model's inputs or decision process. For multi-agent collaboration, threshold encryption ensures that compromising a single node does not leak the system's data, as the secret-sharing sketch below illustrates.
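As an illustration of the threshold idea, the sketch below uses Shamir secret sharing to split a secret key among several agents so that any t of n can jointly reconstruct it, while fewer than t shares reveal nothing. Real threshold-FHE schemes distribute the decryption capability itself rather than reassembling the key in one place; the prime, share counts, and function names here are assumptions for the demo, not a production design.

```python
# Minimal Shamir secret sharing sketch for the "single compromised node is not
# enough" property: a key split into 5 shares with threshold 3.
import secrets

PRIME = 2**127 - 1   # Mersenne prime field large enough for the demo secret

def split_secret(secret: int, n_shares: int, threshold: int):
    """Evaluate a random degree-(threshold-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = 0
        for c in reversed(coeffs):            # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    key = secrets.randbelow(PRIME)            # e.g. a decryption key held jointly
    shares = split_secret(key, n_shares=5, threshold=3)
    assert reconstruct(shares[:3]) == key     # any 3 of the 5 nodes suffice
    assert reconstruct(shares[1:4]) == key
    print("reconstructed OK; 1 or 2 leaked shares reveal nothing about the key")
```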
Although Web3 security technology may seem remote to the average user, it affects everyone. In such an adversarial digital environment, users who do not take active security measures will find it hard to protect their own interests.
Several projects are currently exploring Web3 security. One project, for example, has pioneered running FHE on mainnet and has partnered with several well-known technology companies. Security projects, however, rarely attract speculators' attention, and whether they can break out of that pattern and become leaders in the security field remains to be seen.
As AI approaches human-level intelligence, non-traditional defense systems become increasingly important. Fully homomorphic encryption not only addresses today's security issues but also lays the groundwork for a future era of strong AI. On the road to AGI, FHE is no longer optional; it is a necessary condition for the safe development of AI.