This paper proposes LETTER, a learnable tokenizer for generative recommendation that integrates hierarchical semantics, collaborative signals, and code assignment diversity to improve item tokenization. Existing identifiers, whether ID-based, textual, or codebook-based, each fall short in at least one respect: encoding semantic information, incorporating collaborative signals, or handling code assignment bias.

LETTER addresses these shortcomings with three regularizers, sketched in the code below: a Residual Quantized VAE (RQ-VAE) for semantic regularization, a contrastive alignment loss for collaborative regularization, and a diversity loss that mitigates code assignment bias by spreading out the code embeddings. The tokenizer is instantiated on two generative recommender models, and a ranking-guided generation loss (see the second sketch below) is further proposed to enhance their ranking ability.

Experiments on three datasets validate LETTER's superiority over existing item tokenization methods for generative recommendation. By jointly accounting for hierarchical semantics, collaborative signals, and code assignment diversity, LETTER reduces code assignment bias, improves the diversity of code embeddings, and significantly improves the performance of generative recommender models, particularly their ranking ability. The paper also analyzes LETTER's sensitivity to key hyperparameters: identifier length, codebook size, the strength of the collaborative and diversity regularization, and the temperature.

The study underscores the importance of item tokenization in generative recommendation and suggests promising directions for future research, such as tokenization with rich user behaviors and cross-domain item tokenization for open-ended recommendation.
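As a concrete illustration of the three regularizers, here is a minimal sketch assuming a PyTorch setting. All function names, tensor shapes, and loss formulations are illustrative simplifications, not the authors' released code: the RQ-VAE decoder and straight-through gradients are omitted, the semantic term is reduced to a reconstruction-style MSE, `cf_emb` stands for frozen collaborative-filtering embeddings, and the diversity penalty shown is only one simple way to spread code embeddings apart.

```python
# Minimal sketch of LETTER-style tokenizer regularizers (illustrative, not the
# authors' code). Assumptions: item semantic embeddings come from a text
# encoder; cf_emb are frozen collaborative-filtering embeddings; the RQ-VAE
# decoder and straight-through gradients are omitted for brevity.
import torch
import torch.nn.functional as F

def rq_quantize(residual, codebooks):
    """Residual quantization: at each level, snap the running residual to its
    nearest code and subtract it, yielding a coarse-to-fine (hierarchical)
    code sequence per item."""
    codes, quantized = [], torch.zeros_like(residual)
    for codebook in codebooks:                                # each codebook: (K, d)
        idx = torch.cdist(residual, codebook).argmin(dim=-1)  # (B,)
        selected = codebook[idx]                              # (B, d)
        quantized = quantized + selected
        residual = residual - selected
        codes.append(idx)
    return quantized, torch.stack(codes, dim=-1)              # (B, d), (B, levels)

def collaborative_alignment_loss(id_emb, cf_emb, temperature=0.07):
    """Contrastive (InfoNCE-style) alignment: pull an item's identifier
    embedding toward its CF embedding, pushing away in-batch negatives."""
    q = F.normalize(id_emb, dim=-1)
    c = F.normalize(cf_emb, dim=-1)
    logits = q @ c.t() / temperature                          # (B, B)
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

def diversity_loss(codebook):
    """One simple diversity penalty: discourage pairwise similarity among code
    embeddings so assignments do not collapse onto a few popular codes."""
    e = F.normalize(codebook, dim=-1)
    sim = e @ e.t() - torch.eye(e.size(0), device=e.device)
    return sim.pow(2).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    sem = torch.randn(8, 32)                          # semantic embeddings (B, d)
    books = [torch.randn(256, 32) for _ in range(4)]  # 4 levels x 256 codes
    z, codes = rq_quantize(sem, books)
    semantic = F.mse_loss(z, sem)                     # stand-in for decoder reconstruction
    total = (semantic
             + collaborative_alignment_loss(z, torch.randn(8, 32))
             + sum(diversity_loss(b) for b in books))
    print(codes.shape, total.item())
```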
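For the ranking-guided generation loss, the following is a hedged sketch of one plausible sequence-level formulation: the log-likelihood of the ground-truth identifier is contrasted against sampled negative identifiers. The paper's exact loss may differ; `pos_logp` and `neg_logp` are hypothetical inputs holding summed token log-probabilities under the generative recommender.

```python
# Hedged sketch of a sequence-level, ranking-guided generation objective
# (illustrative; the paper's exact formulation may differ).
import torch
import torch.nn.functional as F

def ranking_guided_generation_loss(pos_logp, neg_logp, tau=1.0):
    """pos_logp: (B,) log-likelihood of each ground-truth identifier;
    neg_logp: (B, N) log-likelihoods of N sampled negative identifiers."""
    logits = torch.cat([pos_logp.unsqueeze(-1), neg_logp], dim=-1) / tau  # (B, 1+N)
    targets = torch.zeros(pos_logp.size(0), dtype=torch.long, device=pos_logp.device)
    return F.cross_entropy(logits, targets)  # index 0 is the ground truth
```

In practice a term like this would be added on top of the standard next-token generation loss, so the model not only generates the correct identifier but also scores it above competing items.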