SciAssess is a benchmark designed to evaluate the proficiency of Large Language Models (LLMs) in scientific literature analysis. It assesses three levels of ability: Memorization (L1), Comprehension (L2), and Analysis & Reasoning (L3). The benchmark comprises 29 tasks across five scientific sub-domains (fundamental science, alloy materials, biomedicine, drug discovery, and organic materials), totaling 14,721 questions. Tasks span multiple content types, including text, charts, chemical reactions, molecular structures, and tables, ensuring a comprehensive evaluation of LLM capabilities. Rigorous quality control measures ensure accuracy, anonymization, and compliance with copyright standards. The benchmark evaluates 11 LLMs, including GPT, Claude, and Gemini, highlighting their strengths and areas for improvement. By identifying both the strengths and weaknesses of LLMs in scientific literature analysis, SciAssess supports the development of more effective LLM applications in this domain and promotes their evolution as assistants in scientific research. The benchmark is available at https://sci-assess.github.io/.