This paper investigates the extent to which users disclose personally identifiable information (PII) and discuss sensitive topics in human-LLM conversations, drawing on the WildChat dataset of one million user-GPT interactions. Through qualitative and quantitative analysis of these naturally occurring conversations, the authors develop a taxonomy of tasks and sensitive topics. They find that PII appears in unexpected contexts, such as translation and code editing, and that PII detection alone is insufficient to capture sensitive topics like sexual preferences or drug use: over 70% of queries contain some form of PII, while 15% mention non-PII sensitive topics. These results highlight the risks of chatbot interactions, including data leakage and broader privacy concerns. Because users often share highly sensitive information, the authors argue that chatbot designers should implement nudging mechanisms that help users moderate what they disclose. The paper also discusses the limitations of current PII detection systems and the need for improved methods to detect and contextualize sensitive topics, and it calls for greater attention to privacy and security in chatbot interactions, including further research into local, private models and the risks of human-LLM conversations. Overall, the work advances our understanding of privacy risks in chatbot interactions and underscores the importance of designing chatbots that protect user privacy.
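To make the central limitation concrete, the minimal sketch below (an illustration, not code from the paper) assumes a simple regex-based PII detector covering only emails and phone numbers. It flags those entities in translation and code-editing style queries but reports nothing for a query about drug use, showing why entity-level PII detection alone cannot surface non-PII sensitive topics.

```python
import re

# Minimal regex-based PII detector (illustrative only; real systems use
# NER-based scrubbers covering many more entity types).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def detect_pii(text: str) -> dict[str, list[str]]:
    """Return PII-like spans found in the text, keyed by entity type."""
    return {
        name: pattern.findall(text)
        for name, pattern in PII_PATTERNS.items()
        if pattern.findall(text)
    }

queries = [
    "Please translate this email: contact me at jane.doe@example.com",
    "Fix this script and call me at +1 555 123 4567 when it works.",
    "What should I tell my doctor about my recreational drug use?",
]

for q in queries:
    hits = detect_pii(q)
    # The third query contains no PII entities, so an entity-level detector
    # reports nothing, even though the topic is clearly sensitive.
    print(hits if hits else "no PII detected", "->", q)
```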