EventDance is a framework for unsupervised, source-free cross-modal adaptation in event-based object recognition: it adapts a model from the image modality to the event modality without access to the labeled source images. The framework addresses the challenge of bridging the modality gap between images and events, which differ fundamentally in data representation and temporal characteristics. EventDance has two key components: a reconstruction-based modality bridging (RMB) module and a multi-representation knowledge adaptation (MKA) module. The RMB module reconstructs intensity frames from event streams in a self-supervised manner, producing surrogate images that mimic the source image distribution and thereby enable knowledge extraction from the source model. The MKA module then transfers this knowledge to target models trained on unlabeled event data, using multiple event representations to fully exploit the spatiotemporal information of events. The two modules are updated mutually to reach the best overall performance. Experiments on three benchmark datasets show that EventDance handles the cross-modal adaptation task effectively and outperforms prior source-free domain adaptation methods, and the framework is flexible enough to extend to other adaptation settings.
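
To make the interplay of the two modules concrete, the following is a minimal PyTorch-style sketch of one adaptation step, not the authors' released implementation. All names (reconstructor, target_models, event_views, the consistency weight 0.1) are assumptions introduced here for illustration; the frozen source classifier supplies soft pseudo-labels from surrogate images (RMB), and several representation-specific target models are trained to match them and to agree with one another (MKA).

```python
import torch
import torch.nn.functional as F

def adaptation_step(events, event_views, source_model, reconstructor,
                    target_models, optimizer):
    """One hypothetical mutual-update step of an EventDance-style loop.

    events:      raw event tensor consumed by the reconstruction network.
    event_views: dict mapping representation name -> batched tensor,
                 e.g. {"voxel": ..., "frame": ..., "timesurface": ...}.
    """
    # --- Reconstruction-based modality bridging (RMB) ---
    # Build surrogate intensity frames that mimic the source (image)
    # distribution, then extract soft knowledge from the frozen source model.
    # The reconstructor itself is assumed to be trained with a separate
    # self-supervised reconstruction objective, omitted here.
    with torch.no_grad():
        surrogate_imgs = reconstructor(events)
        pseudo_labels = F.softmax(source_model(surrogate_imgs), dim=1)

    # --- Multi-representation knowledge adaptation (MKA) ---
    # Each target model consumes a different event representation; all are
    # trained to (i) match the source knowledge and (ii) agree with each other.
    kd_losses, probs = [], []
    for name, model in target_models.items():
        logits = model(event_views[name])
        log_p = F.log_softmax(logits, dim=1)
        kd_losses.append(F.kl_div(log_p, pseudo_labels, reduction="batchmean"))
        probs.append(F.softmax(logits, dim=1))

    # Cross-representation consistency: pull pairs of predictions together.
    consistency = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            consistency = consistency + F.mse_loss(probs[i], probs[j])

    loss = sum(kd_losses) + 0.1 * consistency  # 0.1 is an assumed weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the optimizer would hold the parameters of the target models; alternating it with the reconstructor's self-supervised update is one plausible way to realize the mutual update between the RMB and MKA modules described above.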