A new study by the òòò½´«Ã½ (òòò½´«Ã½) found that the Department of Education’s (DepEd) long-standing practice of marking students “proficient” only if they score at least 75% in national assessments may not accurately reflect what learners actually know and can do.

The paper, “Examining the DepEd’s National Assessments: A Review of Framework, Design, Development, Psychometric Properties, and Utilization,” assessed how national tests are developed and used, and underscored the need to strengthen test quality and ensure closer alignment with curriculum expectations.

According to the study, DepEd’s fixed 75% benchmark for the National Achievement Test (NAT) is not grounded in standard-setting processes that derive cut-offs from curriculum requirements.

“Generally, there are more students reaching the proficient level when using the standard setting cut-offs than the Bureau of Education Assessment (BEA) cut-offs,” the authors reported, suggesting that the current bar may be set too high and may not represent actual learner performance.

Many students who demonstrate the expected skills are still categorized as “nearly proficient” or “low proficient,” highlighting the need for a more evidence-based approach to defining proficiency.

Teachers, school heads, and division testing coordinators interviewed for the study pointed to the need for better alignment between national assessments and classroom instruction.

They noted that system-level tests often emphasize broad 21st-century skills—such as problem-solving and critical thinking—but these skills are difficult to assess properly without clear training, well-developed test items, and a shared understanding of what they look like in practice.

Teachers added that they find data linked to specific learning competencies more useful in improving instruction and recommended that national assessments provide more detailed information to help them better understand student learning.

The study’s analysis further highlighted the need for stronger test development and item validation.

Some test items were found to be too easy, too difficult, or not discriminating enough, underscoring the importance of rigorous quality control in item writing, review, and selection.

“Ideally, system and classroom assessments should be aligned, and if ever there is misalignment, these should be intentional, not unintended outcomes,” the authors stressed.

Stakeholders likewise raised concerns about delays in releasing test results and the lack of clear, skill-based proficiency descriptions.

Teachers said timely, skill-focused reports would help them support student progression more effectively, while clearer proficiency descriptions would keep parents and learners from misreading scores as signs of poor performance.

A more transparent and user-friendly reporting system would allow schools to track progress, identify gaps, and collaborate more effectively on instructional improvements.

As DepEd continues implementing the MATATAG Curriculum and reviews its national assessment policy, the study highlights the need to strengthen every stage of the assessment cycle—from framework development to test design, psychometric analysis, reporting, and utilization.

Strengthening the BEA’s role, whether as an attached agency with safeguards for objectivity or as an independent assessment body, could help ensure continuity, transparency, and fairness in reporting learning outcomes.

“These improvements in the assessment system in basic education would enable better tracking of learner progress, inform educational reforms, and ultimately elevate the quality of education in the Philippines,” the authors concluded.

Read the full study at . ### — MJCG/MAEC


