Exams are a vital part of an educational system that pursues defined goals, and they are valuable because they assess a student’s progress toward those goals. An examination is the process of evaluating a student’s ability or accomplishments in any subject of an academic programme. However, several factors make it difficult to assess a student’s true achievement. This research aims to identify the factors that influence university students’ examination performance.
Research Overview:
A questionnaire served as the research instrument. It was administered to 200 students at Bahauddin Zakariya University, Multan, Pakistan: 100 from the Faculty of Arts and 100 from the Faculty of Science.
The mean score was calculated to establish students’ overall response to each question. Using the standard deviation and the Z test, data were compared across genders and across faculties, and results were determined for each statement.
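As an illustration of the comparison described above, here is a minimal Python sketch of a two-sample Z test on per-item summary statistics. The numbers are hypothetical, not taken from the study; only the group sizes (100 respondents per faculty) match.

```python
# A hedged sketch of the two-sample Z test the study describes, comparing
# mean item scores between the two faculties. The summary statistics below
# are illustrative assumptions, not values from the paper.
import math

def two_sample_z(mean1, sd1, n1, mean2, sd2, n2):
    """Z statistic for the difference between two independent sample means."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / se

# Hypothetical per-item means and standard deviations for Arts vs. Science.
z = two_sample_z(mean1=3.8, sd1=0.9, n1=100, mean2=3.4, sd2=1.1, n2=100)
print(f"Z = {z:.2f}; |Z| > 1.96 indicates significance at the 5% level")
```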
Conclusions were drawn from the results.
It was discovered that: (i) the respondents believed that most psychological, physical, socio-economic, and educational factors affected their performance in examinations at the university level;
(ii) changes in the pattern of examination question papers affected student performance;
(iii) the respondents believed that unfair means in examinations affected their performance; and
(iv) a lack of proper guidance affected their performance in examinations. On the basis of these findings, the following recommendations were made to improve the examination system: students should receive proper training before the final exam to avoid both overconfidence and exam phobia; the examination room should be calm and supportive; the question paper should be pitched at an appropriate level of difficulty, neither too easy nor too hard; and examiners should pay closer attention to the quality of students’ written work.
Jackie, one of our most experienced researchers, is investigating how psychology can be used to improve assessment.
Psychology is used in the creation and implementation of school tests, and we conduct psychological research to inform practice. Martin Johnson, Tori Coleman, and I recently drew on psychological research to show how to get the most out of an assessor’s judgement when assessing complex competence. Several years ago, Irenka Suto and I investigated the cognitive strategies used in examination marking and their implications for GCSE marking.
Consequently, I keep my knowledge of psychology up to date in order to see how the latest research influences practice. This essay focuses on two recent research studies on school exams.
The first paper, written by David Putwain and his colleagues, examines how pupils interpret teachers’ “fear appeals” in GCSE Mathematics. A teacher uses a fear appeal when warning that certain behaviours may lead to undesirable consequences; for instance, “If you act irresponsibly in class, you won’t perform well on your maths test, and you won’t have a great career.”
Students may perceive a fear appeal as either a challenge or a threat. Students who believe they can succeed, and who place a premium on educational achievement, tend to see a fear appeal as a challenge. Students who want to do well in school but doubt their own abilities tend to see it as a threat.
Universities were compelled by the pandemic to initiate or accelerate their online exam delivery initiatives. These forced changes have often been distressing, resulting in tension and fatigue. Exams have been a particular pain point.
There are many accounts of widespread cheating in online exams, ranging from the humorous to the dismal. Regardless, cheating creates complications for all concerned. Universities are using artificial intelligence to safeguard the integrity of examinations. This, however, creates problems of its own.
To properly identify, organize, and promote student learning, we need to understand what students have achieved. Assessment is intended to contribute to this understanding.
Exams are high-stakes occasions that produce large “chunks” of evidence of student achievement. Cheating invalidates this evidence, with consequences at the individual, course, and programme levels.
Typically, analyses of the previous year’s results serve as a basis for academic programme evaluations, and modifications to courses and assessments are made using examination data. If a large share of test scores is the product of cheating, this can lead to mistaken assessments of the curriculum and errors in designing future tests.
What happened during the pandemic?
The widespread adoption of remote proctoring by institutions is therefore understandable. Remote proctoring involves identifying and monitoring students during exams using software driven by artificial intelligence. Its value proposition is that it lets us replicate, virtually, the security of an in-person, seated, proctored exam, regardless of where students are located. It seemed a remedy made for the pandemic.
There is evidence that remote proctoring is functioning as anticipated. However, we must also take into account new issues.
Students have reacted negatively to what they perceive as invasive monitoring tactics. Universities’ unquestioning acceptance of cheating charges in instances “flagged” by monitoring software also raises concerns.
On the academic side, it is becoming clearer that remote proctoring does not always reduce staff workload. It may even increase exam-related responsibilities. Many students are also dissatisfied with ANU’s decision to administer exams using remote-proctoring software.
Working in educational assessment for the last two decades has taught me that test cheating is a significant and intricate problem. There are no simple answers. It is probable that remote proctoring will continue to play a role. It is crucial, however, that we define this position thoroughly and critically.
Why not return to previous practices?
With increasing enrollment and the reintroduction of in-person instruction, it is tempting to revert to traditional exam procedures. Returning to conventional exams, however, brings back other well-documented, persistent problems.
Managing large-scale, in-person examinations is quite difficult. It is also challenging to ensure that conventional examinations reflect contemporary capabilities.
How satisfied were we with pre-pandemic examination procedures?
Putwain and colleagues investigated the statistical link between challenge and threat perceptions and GCSE Mathematics exam outcomes, and whether engagement mediates this association; a simplified sketch of that mediation logic follows the list of findings below. They gathered the GCSE Maths exam results of 579 students, along with their responses to two surveys covering engagement (on-task behavior, perseverance, and classroom involvement), disaffection with learning, and whether fear appeals were seen as threats or challenges.
These were the findings:
- Engagement predicted GCSE Mathematics exam performance.
- When students saw fear appeals as challenges, they expected to do well because they were more motivated.
- When students saw fear appeals as a threat, they expected to do poorly because they didn’t care as much.
- These findings are a generalization and may not apply to every person.
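To illustrate what “engagement mediates this association” means in practice, here is a minimal Python sketch of a Baron–Kenny-style mediation check on synthetic data. The variable coding, effect sizes, and data are all illustrative assumptions; only the sample size echoes the study.

```python
# A minimal sketch of the mediation logic Putwain and colleagues tested:
# perception of a fear appeal -> engagement -> exam score.
# All numbers are simulated, not taken from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 579  # matches the study's sample size; the data itself is synthetic

# +1 = fear appeal seen as a challenge, -1 = seen as a threat (assumed coding)
perception = rng.choice([-1.0, 1.0], size=n)
# Engagement is assumed to rise with challenge perceptions.
engagement = 0.5 * perception + rng.normal(0, 1, n)
# Exam score is assumed to depend on engagement, not directly on perception.
score = 0.8 * engagement + rng.normal(0, 1, n)

def ols_slope(x, y):
    """Slope from a simple least-squares regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

total = ols_slope(perception, score)   # path c: total effect on the score
a = ols_slope(perception, engagement)  # path a: perception -> engagement
# Paths b and c': regress the score on perception and engagement together.
X = np.column_stack([np.ones(n), perception, engagement])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
direct, b = beta[1], beta[2]

print(f"total effect c   = {total:+.3f}")
print(f"indirect a*b     = {a * b:+.3f}")
print(f"direct effect c' = {direct:+.3f}  (near zero suggests mediation)")
```

In this simulation the total effect of perception on the score is carried almost entirely through engagement, which is the pattern the study reports for real students.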
Given these results, and similar findings from other studies, Putwain and his colleagues suggest that teachers should use fear appeals in GCSE Maths only with groups or individuals who want to do well in school and believe that they can.
The second paper, authored by Howard and a team, focuses on the characteristics measured by Australia’s national assessments. According to them, many standardized literacy and numeracy exams are criticized for potentially evaluating working memory and nonverbal reasoning in addition to the skills they are designed to target.
In light of these critiques, Howard and his colleagues investigated whether working memory and nonverbal reasoning contributed to success on Australia’s National Assessment Program – Literacy and Numeracy (NAPLAN) tests.
This is a nationally standardized exam in reading, writing, mathematics, and language conventions (punctuation, grammar, and spelling). In the study, 91 children between the ages of 7 and 8, drawn from a range of schools, completed the national exams, a working memory test, and a nonverbal reasoning test. The results showed that nonverbal reasoning and working memory influenced how well the students performed on the reading and math tests.
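To make that finding concrete, here is a minimal Python sketch of the kind of analysis described: regressing a simulated reading score on working memory and nonverbal reasoning measures. All variable names, coefficients, and data are illustrative assumptions, not values from Howard and colleagues’ study.

```python
# A minimal sketch of regressing a national test score on two cognitive
# measures. Data is simulated; only the sample size matches the study.
import numpy as np

rng = np.random.default_rng(1)
n = 91  # the study's sample size; the data itself is synthetic

working_memory = rng.normal(0, 1, n)
nonverbal_reasoning = rng.normal(0, 1, n)
# Assume the reading score partly reflects both cognitive measures.
reading = 0.4 * working_memory + 0.3 * nonverbal_reasoning + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), working_memory, nonverbal_reasoning])
beta, *_ = np.linalg.lstsq(X, reading, rcond=None)
print(f"working memory coefficient:      {beta[1]:+.2f}")
print(f"nonverbal reasoning coefficient: {beta[2]:+.2f}")
# Clearly nonzero coefficients would suggest the test taps these general
# abilities in addition to the literacy skills it is meant to measure.
```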
Key Takeaways:
- Stating which skills an assessment is intended to evaluate
- Checking that the assessment actually measures those intended skills
- Describing how the test results can legitimately be used