This paper introduces Bi-ACT, a novel approach that combines bilateral control with Action Chunking with Transformers (ACT) for robotic imitation learning. The method integrates bilateral control principles with imitation learning to create a more robust and efficient control system for autonomous robotic arms. Bi-ACT takes joint angles, joint velocities, torques, and images as inputs and predicts the subsequent steps of the leader robot's joint angles, angular velocities, and torques, enabling more nuanced and responsive maneuvering. The model is trained on data collected via bilateral control, which captures both position and force information; this allows the robot to adapt to the hardness and weight of objects, which is not possible with position control alone. The effectiveness of Bi-ACT is validated through extensive real-world experiments on pick-and-place and put-in-drawer tasks with objects of varying hardness. The results show that Bi-ACT achieves high accuracy and success rates on these tasks, demonstrating its ability to adapt to new environments, and that handling both position and force information significantly improves its effectiveness in imitation learning. The paper also describes the experimental setup, training dataset, and results, showing that Bi-ACT outperforms comparison methods on diverse datasets and complex tasks. Future work includes improving the robustness and adaptability of Bi-ACT, integrating multimodal sensory inputs, and generalizing the method across diverse robotic platforms.
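To make the input/output interface concrete, the following is a minimal PyTorch sketch of a Bi-ACT-style policy, not the authors' implementation: the class name, joint count, chunk size, and layer dimensions are illustrative assumptions, and details of the real architecture (such as ACT's CVAE-style training objective) are omitted. It only illustrates how joint angles, velocities, torques, and an image could be encoded together and mapped to a chunk of future leader-side commands.

```python
# Minimal sketch of a Bi-ACT-style policy interface (illustrative assumptions only).
# A transformer encoder fuses an image feature token with the follower robot's state
# (joint angles, velocities, torques) and regresses a chunk of future leader commands
# (angles, velocities, torques) in a single forward pass.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class BiACTPolicySketch(nn.Module):
    def __init__(self, n_joints: int = 5, chunk_size: int = 100, d_model: int = 256):
        super().__init__()
        self.n_joints = n_joints
        self.chunk_size = chunk_size

        # Image backbone: ResNet-18 trunk producing a single feature token.
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.image_encoder = backbone
        self.image_proj = nn.Linear(512, d_model)

        # Robot state token: angle, velocity, torque for each joint.
        self.state_proj = nn.Linear(3 * n_joints, d_model)

        # Learned query tokens, one per predicted timestep in the action chunk.
        self.queries = nn.Parameter(torch.randn(chunk_size, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

        # Predict leader-side angle, velocity, torque per joint per timestep.
        self.head = nn.Linear(d_model, 3 * n_joints)

    def forward(self, image: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W), state: (B, 3 * n_joints)
        img_tok = self.image_proj(self.image_encoder(image)).unsqueeze(1)   # (B, 1, d)
        state_tok = self.state_proj(state).unsqueeze(1)                     # (B, 1, d)
        queries = self.queries.unsqueeze(0).expand(image.size(0), -1, -1)   # (B, T, d)
        tokens = torch.cat([img_tok, state_tok, queries], dim=1)
        out = self.encoder(tokens)[:, 2:, :]  # keep only the chunk query outputs
        return self.head(out)                 # (B, chunk_size, 3 * n_joints)


if __name__ == "__main__":
    policy = BiACTPolicySketch()
    image = torch.randn(2, 3, 224, 224)
    state = torch.randn(2, 15)  # 5 joints x (angle, velocity, torque), an assumed layout
    action_chunk = policy(image, state)
    print(action_chunk.shape)  # torch.Size([2, 100, 15])
```

In a full system the predicted chunk would presumably be tracked by the follower's bilateral controller, which is what lets the force components of the output actually modulate contact with soft or heavy objects; that control loop is outside the scope of this sketch.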