CABINET is a framework designed to improve the performance of Large Language Models (LLMs) at answering questions over tables by focusing on relevant tabular data and suppressing extraneous information. It includes an Unsupervised Relevance Scorer (URS) that assigns each piece of table content a score reflecting its relevance to the input question; this score is then used to weigh the content passed to the question-answering LLM (QA LLM), allowing the model to focus on the relevant content. CABINET also employs a weakly supervised module that generates a parsing statement describing the criteria for rows and columns relevant to the question and highlights the content of the corresponding table cells.

The framework is trained end-to-end through a cross-entropy loss between the generated and ground-truth answer tokens, and it also incorporates clustering and sparsification losses to improve performance. Evaluated on three commonly used datasets, CABINET significantly outperforms various tabular LLM baselines and GPT-3-based in-context learning methods in accuracy and S-BLEU, establishing new state-of-the-art results on WikiTQ, FeTaQA, and WikiSQL. It is also more robust to noise and structural biases in tables, and its performance gains are even more pronounced for larger tables. These results demonstrate that CABINET is effective at identifying relevant content and makes the QA LLM relatively robust to table size.
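The relevance-weighting step can be pictured as a simple element-wise scaling: the URS produces one score per table token, and those scores down-weight the embeddings of irrelevant content before they reach the QA LLM. The sketch below is illustrative only; the function and argument names are not from the paper.

```python
import numpy as np

def relevance_weighted_embeddings(token_embeddings, relevance_scores):
    """Scale each table-token embedding by its URS relevance score.

    A minimal sketch of the mechanism described above: scores near 1
    preserve a token's embedding, while scores near 0 suppress it,
    so the QA LLM effectively attends to the relevant cells.
    """
    token_embeddings = np.asarray(token_embeddings, dtype=float)
    relevance_scores = np.asarray(relevance_scores, dtype=float)
    # Broadcast the (n_tokens,) score vector across the embedding dim.
    return token_embeddings * relevance_scores[:, None]

# Example: three tokens with embedding dim 4; the third token is
# judged irrelevant and is zeroed out.
emb = np.ones((3, 4))
scores = np.array([1.0, 0.5, 0.0])
weighted = relevance_weighted_embeddings(emb, scores)
```

This keeps the pipeline fully differentiable, which is what allows the scorer to be trained end-to-end from the answer loss alone.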
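The training objective described above combines answer cross-entropy with auxiliary terms. A hedged sketch of one way such an objective could be assembled is shown below; the weights `alpha` and `beta` and the exact forms of the clustering and sparsification terms are assumptions, since the summary only states that these losses are added to the answer cross-entropy.

```python
import numpy as np

def combined_loss(answer_logits, target_ids, relevance_scores,
                  alpha=0.1, beta=0.01):
    """Illustrative combined objective: answer cross-entropy plus
    auxiliary terms on the URS relevance scores (assumed forms)."""
    # Token-level cross-entropy over the generated answer (softmax).
    logits = np.asarray(answer_logits, dtype=float)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    target_ids = np.asarray(target_ids)
    ce = -np.mean(np.log(probs[np.arange(len(target_ids)), target_ids]))

    # Sparsification (assumed L1 form): push most scores toward zero.
    scores = np.asarray(relevance_scores, dtype=float)
    sparsity = np.mean(np.abs(scores))

    # Clustering (assumed proxy): penalize scores near 0.5 so they
    # separate into relevant (~1) and irrelevant (~0) groups.
    clustering = np.mean(scores * (1.0 - scores))

    return ce + alpha * clustering + beta * sparsity
```

When the model is confident in the correct answer tokens and the relevance scores are already binary, all three terms are near zero, so the auxiliary losses act only as regularizers on the scorer.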