OneChart is a novel method for chart structural extraction that introduces an auxiliary token to improve numerical accuracy and reliability. The model pairs an autoregressive main body with an additional number decoder: the auxiliary token is placed at the beginning of the token sequence so that, through causal attention, it aggregates enhanced numerical features for that decoder. A self-evaluation mechanism additionally assesses the reliability of the parsing results by assigning confidence scores to the generated content.

Despite having only 0.2 billion parameters, OneChart surpasses existing state-of-the-art chart parsing models in average precision (AP) across multiple public benchmarks, and it brings accuracy gains of more than 10% to the popular LVLM LLaVA-1.6 on the downstream ChartQA benchmark. Evaluations across diverse chart types and languages confirm that OneChart achieves SOTA performance in structural extraction.

The model is trained on a combination of synthetic and real chart data in three stages: pretraining, warming up the auxiliary number decoder, and supervised fine-tuning. At inference, a self-consistency distance check between the generated text and the auxiliary decoder's numeric predictions determines whether a raw prediction is reliable. Beyond structural parsing, OneChart also improves downstream tasks such as QA: integrating it with popular VLMs yields substantially higher accuracy on chart-related question answering.
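The inference-time self-consistency check described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function names, the mean-relative-error distance, the threshold value, and the exponential confidence mapping are all assumptions made for the example.

```python
import math
import re


def parse_numbers(generated_text: str) -> list[float]:
    # Extract numeric values from the model's autoregressive text output
    # (e.g. a parsed chart table rendered as text).
    return [float(x) for x in re.findall(r"-?\d+(?:\.\d+)?", generated_text)]


def self_consistency_check(text_numbers: list[float],
                           decoder_numbers: list[float],
                           threshold: float = 0.1) -> tuple[bool, float]:
    """Compare numbers parsed from the text output against the auxiliary
    number decoder's direct predictions.  A small normalized distance means
    the two heads agree, so the parse is marked reliable.  Distance metric,
    threshold, and confidence mapping are illustrative assumptions."""
    if len(text_numbers) != len(decoder_numbers):
        # Length mismatch: the two outputs disagree structurally,
        # so treat the parse as unreliable.
        return False, 0.0
    # Mean relative error between the two sets of predictions.
    dist = sum(
        abs(t - d) / max(abs(d), 1e-6)
        for t, d in zip(text_numbers, decoder_numbers)
    ) / max(len(text_numbers), 1)
    confidence = math.exp(-dist)  # map distance to a (0, 1] score
    return dist < threshold, confidence
```

For example, if the text output yields `[14.2, 17.8]` and the auxiliary decoder predicts `[14.0, 18.0]`, the relative distance is small, so the parse would be flagged reliable with a confidence close to 1.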
The results highlight the effectiveness of OneChart in chart parsing and reasoning, and its potential for future research and applications.