ZTE debuts its AI Booster Intelligent Computing Platform at MWC Shanghai 2024
Release Time: 2024-06-26
  • ZTE showcased its AI Booster Intelligent Computing Platform at MWC Shanghai 2024
  • ZTE's AI Booster Intelligent Computing Platform shields bottom-layer implementation and complicated details for users through the visual tool chain, reducing the technical threshold of large model training and inference

Shanghai, China, 26 June 2024 - ZTE Corporation (0763.HK / 000063.SZ), a global leading provider of integrated information and communication technology solutions, has unveiled its vision for telecom network cloud-native architecture and AI standardization at MWC Shanghai 2024. Among its key showcases is the AI Booster Intelligent Computing Platform, which adopts a wizard tool chain to streamline the entire process of training, fine-tuning and performance optimization for large models. Through its tool-based engineering capabilities, the platform significantly improves the development efficiency of large-scale models and AI applications.

Industries increasingly expect large-model technologies to enhance production and management efficiency. Beyond general-purpose large models, it has therefore become essential to fine-tune models on domain-specific data and build industry-specific large models that combine broad and specialized capabilities. The training and fine-tuning of large models is a complex process, involving base model selection, data preparation, training policy formulation, hyper-parameter adjustment, and model performance optimization, so a user-friendly tool platform is crucial. The ZTE AI Booster Intelligent Computing Platform shields users from the complexities of bottom-layer implementation and intricate details through its visual tool chain, thereby reducing the technical barriers to large model training and inference.
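As a rough illustration of the workflow the paragraph describes, the sketch below models a fine-tuning job configuration covering the listed steps: base model selection, data preparation, training policy formulation, and hyper-parameter adjustment. All names and values here are hypothetical; the actual ZTE tool chain interface is not public in this release.

```python
from dataclasses import dataclass, field

@dataclass
class FineTuneJob:
    """Hypothetical job spec for the steps a wizard tool chain automates."""
    base_model: str                      # base model selection
    dataset_path: str                    # prepared domain data
    training_policy: str = "lora"        # e.g. full fine-tune vs. parameter-efficient
    hyper_params: dict = field(default_factory=lambda: {
        "learning_rate": 2e-5,
        "epochs": 3,
        "batch_size": 16,
    })

    def validate(self) -> bool:
        # Minimal sanity checks a guided tool chain might run
        # before launching the training job.
        return bool(self.base_model) and self.hyper_params["learning_rate"] > 0

job = FineTuneJob(base_model="telecom-base-7b", dataset_path="data/domain.jsonl")
print(job.validate())  # → True
```

A wizard-style platform would collect these fields step by step and validate them, so users never touch the underlying training scripts directly.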

The AI Booster Intelligent Computing Platform comprises the AI resource management platform and the AI training & inference platform. The AI resource management platform supports unified management, orchestration, and scheduling of heterogeneous computing power, along with resource pooling and virtualization to maximize resource utilization. The AI training & inference platform guides the entire large-model training and inference process through a wizard-style, visualized tool chain, relieving users of the need to monitor intelligent computing resources directly. The tool chain automates parallel training, evaluation and optimization, and one-click deployment of large models, thereby alleviating the technical and engineering challenges of large-scale model training and inference. It also enables rapid development of dedicated large models tailored for various sectors, including O&M large models, government-affair large models, and campus large models.

Moving forward, ZTE will continue to explore the field of intelligent computing, deepen ecological cooperation, and refine application practices. These efforts aim to accelerate the intelligent transformation and upgrade across various industries.

ABOUT ZTE:

ZTE helps to connect the world with continuous innovation for a better future. The company provides innovative technologies and integrated solutions; its portfolio spans all series of wireless, wireline, devices and professional telecommunications services. Serving over a quarter of the global population, ZTE is dedicated to creating a digital and intelligent ecosystem, and enabling connectivity and trust everywhere. ZTE is listed on both the Hong Kong and Shenzhen Stock Exchanges. www.zte.com.cn/global

FOLLOW US:

Facebook www.facebook.com/ZTECorp

X www.twitter.com/ZTEPress

LinkedIn www.linkedin.com/company/zte

YouTube www.youtube.com/@ZTECorporation

MEDIA INQUIRIES:

ZTE Corporation

Communications

Email: ZTE.press.release@zte.com.cn


Disclaimer

ZTE Corporation published this content on 25 June 2024 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 26 June 2024 08:47:33 UTC.