Browser-Based Tabulation System (BTS): Its Efficiency To School-Related Activities
Volume: 38
Issue: 6
Pages: 603-614
Document ID: 2025PEMJ3691
DOI: 10.70838/pemj.380605
Manuscript Accepted: 04-30-2025
Research Article
Introduction
Computers continue to evolve and improve. They store, share, and provide access to information at a time when advancing technology keeps achieving more than people once thought possible. Technology develops computers to help people meet their varied needs. According to Hamilton (2016), a computer is a tool for performing computations or managing actions that may be expressed in numerical or logical terms. In terms of business, the latest technological advancements include the automation of tools and appliances as well as other computer-controlled systems, aiming to enhance human life, reduce human workload, and simplify work.
Most companies, including the government, use automated systems and contemporary technology to serve their customers quickly and effectively. Even national television contests use precise tabulation systems to announce winners quickly and easily. The vote-counting machines used during national elections, which allow the Commission on Elections to proclaim winners within a couple of days, are among the most significant technologies recently used in the country (Afable et al., 2020).
Tabulation is defined by Byjus (2020) as a systematic and logical representation of numeric data in rows and columns to facilitate comparison and statistical analysis. The tabulation system provides effective, fast, and accurate results. Events and competitions are extremely common, particularly in school settings. The tabulation procedure is crucial in every competition. In the past, tabulators manually calculated the results and recorded them on paper, a process prone to mistakes (Ontua et al., 2022).
While schools within the Department of Education frequently organize activities across various subjects, the current tabulation methods
remain predominantly manual. This manual process is time-consuming and prone to errors, which can significantly hinder efficient
event management and the accuracy of results (Agaylo et al., 2024). Despite the critical role of accurate and efficient tabulation in
ensuring fair and timely outcomes, there is a noticeable lack of automated systems designed specifically for educational institutions
(Hemavathi et al., 2024).
Moreover, there is limited research on the implementation of automated tabulation in educational institutions, despite its widespread
use in other sectors (Pivtorak et al., 2024). Most existing tabulation systems are designed for business and government applications,
with few tailored specifically for school competitions. Additionally, while studies acknowledge the inefficiencies of manual tabulation,
they do not sufficiently explore how automation can enhance event management in schools (Nobari et al., 2024). Furthermore, there is
a lack of research evaluating the impact of digital tabulation on accuracy and efficiency, leaving a gap in understanding its effectiveness
in improving result computation and processing time in educational settings (Khotimah et al., 2024).
Thus, a proposed Browser-Based Tabulation System was developed for school utilization during events or competitions. The suggested
system made tabulating scores easier. The system was used by the judges to enter their evaluations of each contestant. The system
generated the competition results and electronically tallied the scores. This research facilitated easy, quick, accurate, and practical
tabulation. The researcher hoped that this endeavor gives significant solutions to difficulties in every school event and competition that
paves in the way toward better results.
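To illustrate the kind of computation the system automates, a minimal sketch follows: each contestant's scores from the judges are averaged and the contestants ranked. The contestant names, score scale, and averaging rule are hypothetical and do not reproduce the BTS's actual scoring logic.

```python
# Illustrative sketch only: averaging judges' scores per contestant and
# ranking the results. All names and the averaging rule are hypothetical.
from statistics import mean

def tabulate(scores: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Average each contestant's scores from all judges and rank them."""
    averages = {contestant: mean(judge_scores)
                for contestant, judge_scores in scores.items()}
    # Highest average ranks first
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

# Example: three judges scoring two contestants
results = tabulate({"Contestant A": [92.0, 88.5, 90.0],
                    "Contestant B": [85.0, 91.0, 89.5]})
for rank, (name, avg) in enumerate(results, start=1):
    print(f"{rank}. {name}: {avg:.2f}")
```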
Research Objectives
Generally, this study aimed to develop and evaluate the efficiency of the Browser-Based Tabulation System for school-related activities.
Methodology
Research Design
This study adopted a Developmental-Descriptive-Evaluative research design to guide the creation and assessment of the Browser-
Based Tabulation System (BTS). Each phase played a distinct yet interconnected role in ensuring that the system met the real needs of
its users—judges, tabulators, and IT experts—in managing school-related events with efficiency and accuracy.
The developmental phase centered on the thoughtful planning and construction of the system. Drawing from established software
development principles, the researchers identified user needs, translated them into functional system features, and built a working
prototype. This stage was more than just coding—it was about crafting a tool that could simplify and improve the tabulation process,
making it faster, more accurate, and user-friendly.
The descriptive component of the research aimed to capture and understand how users interacted with the system. As Glass and Hopkins
(1984) describe, descriptive research involves gathering and presenting data that explain what is happening.
In this study, feedback was collected using structured survey questionnaires, which explored how users perceived the system in terms
of accessibility, accuracy, compatibility, functionality, and reliability. This gave the researchers valuable insight into the real-world
experiences and satisfaction of those who used the BTS.
Finally, the evaluative phase focused on assessing how effective the system was in comparison to traditional tabulation methods. The
researchers not only examined the strengths and limitations of the BTS but also explored how well it performed in actual use.
Both qualitative and quantitative data were analyzed using appropriate tools, such as means and standard deviations, to draw meaningful
conclusions. Careful attention was given to selecting respondents, using appropriate sampling techniques, and ensuring that the findings
were both valid and reliable.
Overall, this comprehensive approach ensured that the study did not just develop a technological tool—it created, understood, and
evaluated a solution that addressed real challenges in tabulating school event results, aiming to contribute lasting value to its users and
the academic community.
Respondents
This study involved a total of ninety (90) respondents, consisting of thirty (30) IT experts, thirty (30) judges, and thirty (30) tabulators.
The selection of these specific groups was crucial in ensuring that the evaluation of the system was conducted from different
perspectives—technical, adjudicative, and operational. IT experts provided insights into the system's technological aspects, such as
security, functionality, and efficiency. Judges evaluated how well the system facilitated scoring and tabulation, while tabulators
assessed its usability and accuracy in recording and processing results.
Survey questionnaires were distributed to all participants at the beginning of the event or contest. This timing was strategically chosen
to allow respondents to assess the system in real-time while they were actively using it. By collecting immediate feedback, the study
aimed to capture firsthand experiences and observations, minimizing recall bias and ensuring that the responses accurately reflected
the system's performance during actual use.
The data gathered from these surveys enabled a thorough evaluation of the system across multiple dimensions, including functionality,
user experience, accessibility, and overall effectiveness. By analyzing the responses, the researcher was able to identify specific
strengths of the system, such as its efficiency in handling tabulations or its ease of use. Additionally, areas that required improvement,
such as potential technical glitches or user interface concerns, were pinpointed. This real-time assessment allowed for timely
adjustments and enhancements to the system, ensuring its continued optimization for future use in school-related activities.
The study used a stratified random sampling technique to ensure that each group of users was adequately represented in the sample.
The population of interest was divided based on the three (3) groups of users – IT experts, judges, and tabulators. To select the sample,
a random selection of participants was made from each category in proportion to the size of the group in the population. Specifically,
the research team selected thirty (30) IT experts, thirty (30) judges, and thirty (30) tabulators who were chosen randomly from their
respective populations.
This sampling technique ensured that the findings were generalizable to the population of users and that each group was adequately
represented in the sample. It enabled comparisons to be made between the three groups of users in terms of their perceptions of the
accessibility, accuracy, compatibility, functionality, and reliability of the tabulation system for school-related activities.
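As a minimal sketch of this stratified selection, the snippet below assumes hypothetical population lists for each stratum and draws thirty respondents at random from each of the three user groups, as described above.

```python
# Sketch of stratified random sampling: 30 respondents drawn at random
# from each user group. Population sizes here are hypothetical.
import random

random.seed(42)  # for reproducibility of the illustration

population = {
    "IT experts": [f"it_{i}" for i in range(120)],
    "judges":     [f"jdg_{i}" for i in range(80)],
    "tabulators": [f"tab_{i}" for i in range(95)],
}

sample = {group: random.sample(members, 30)
          for group, members in population.items()}

for group, members in sample.items():
    print(group, len(members))  # 30 from each stratum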
Instrument
To evaluate the developed system, an adopted and modified survey questionnaire was used. The survey questionnaire was distributed
to all groups of participants: IT experts, judges, and tabulators. The survey instrument was adopted from Agaylo et al. (2024) and
focused on the respondents' perception of the system's performance.
The survey questionnaire covered five (5) indicators each for the accessibility, accuracy, compatibility, functionality, and reliability of the
system. Participants were asked to rate the indicators using a 5-point Likert Scale. The Likert Scale ranged from 1 to 5, where 1
represented "strongly disagree" and 5 represented "strongly agree."
The survey questionnaire was designed to gather honest and detailed observations from participants, aiming to assess the system's
performance across multiple dimensions. To achieve this, a Likert Scale was utilized, offering a standardized and structured approach
to data collection that ensured the reliability and consistency of the data gathered.
The inclusion of the Likert Scale allowed participants to rate their experiences and perceptions on a uniform scale, facilitating the quantitative analysis of responses. This method enabled the researcher to capture insights into the system's functionality, user
satisfaction, and overall performance.
By adapting the survey questionnaire to incorporate the Likert Scale, the researcher ensured a comprehensive and effective means of evaluating the developed system. This approach not only provided valuable quantitative data but also supported a thorough analysis of the system's strengths and areas for improvement, ultimately contributing to the refinement and optimization of the system.
Table 1. Five-Point Likert Scale for the System Evaluation
Scale   Range of Means   Description         Interpretation
5       4.21 - 5.00      Strongly Agree      Very Satisfied
4       3.41 - 4.20      Agree               Satisfied
3       2.61 - 3.40      Moderately Agree    Neutral/Uncertain
2       1.81 - 2.60      Disagree            Not Satisfied
1       1.00 - 1.80      Strongly Disagree   Very Not Satisfied
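For illustration, the range-of-means mapping in Table 1 can be expressed as a small lookup that converts a computed mean rating into its description and interpretation; the function name is an assumption for this sketch.

```python
# Sketch of the Table 1 range-of-means mapping.
def interpret(mean_score: float) -> tuple[str, str]:
    bands = [
        (4.21, "Strongly Agree",    "Very Satisfied"),
        (3.41, "Agree",             "Satisfied"),
        (2.61, "Moderately Agree",  "Neutral/Uncertain"),
        (1.81, "Disagree",          "Not Satisfied"),
        (1.00, "Strongly Disagree", "Very Not Satisfied"),
    ]
    for lower, description, interpretation in bands:
        if mean_score >= lower:
            return description, interpretation
    raise ValueError("Mean must be within 1.00-5.00")

print(interpret(4.70))  # ('Strongly Agree', 'Very Satisfied')
```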
Procedure
The data collection process for this study adhered to thorough standard operating procedures approved by the Graduate School Dean
before the initiation of the study. To ensure compliance with ethical guidelines, a consent letter was prepared for all participants.
Before initiating the data collection, the researcher conducted a thorough review of the existing processes or flowchart related to the
system's objectives. This preliminary analysis was crucial for identifying potential areas for improvement and ensuring the relevance
and accuracy of the collected data. The researcher examined the relevant procedures on how the manual process was used to compute
the total percentage for the winning candidates or participants and consulted with subject matter experts to gain a comprehensive
understanding. The first step in the research process involved identifying the system's objectives. Based on these objectives, the
researcher determined the specific data requirements necessary for the study. To gather the required data, the researcher employed a
variety of methods, including surveys, interviews, and observations. After collecting the data, the researcher undertook a meticulous
validation process to ensure its accuracy and completeness. The researcher also documented the data comprehensively, including
information about its sources, collection methods, and any insights or findings.
Data Analysis
The process of analyzing data involved several key stages to ensure a thorough evaluation of the system’s performance. First, the data
was organized and tabulated to structure the information in a clear and accessible format, preparing it for deeper analysis.
To evaluate the system's Accessibility, Accuracy, Compatibility, Functionality, and Reliability, Mean and Standard Deviation were
employed. The Mean provided an average score for each criterion, offering insight into the overall performance. According to Toledo
(2024), the Standard Deviation measured the variability in responses, indicating the consistency of the system’s effectiveness across
different users.
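As a sketch of this descriptive step, the snippet below computes the per-criterion mean and standard deviation from made-up ratings; the values shown are not the study's data.

```python
# Sketch of the descriptive analysis: mean and sample standard deviation
# per criterion over 5-point ratings. Ratings are hypothetical examples.
from statistics import mean, stdev

ratings = {
    "Accessibility": [5, 5, 4, 5, 4],
    "Accuracy":      [5, 4, 5, 4, 5],
}

for criterion, scores in ratings.items():
    print(f"{criterion}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}")
```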
Further analysis was conducted to examine the variation in assessments among the different user groups, such as IT experts, judges,
and tabulators. To achieve this, an ANOVA Test for the Significant Difference in the Level of Satisfaction of the Evaluators was used.
It compared the responses from these groups to assess whether significant differences existed in their evaluations of the system’s key
attributes. Additionally, to better understand the advantages and disadvantages of the developed system, Frequency and Ranking
Methods were applied. Frequency analysis identified how often certain advantages and disadvantages were reported by users, while
ranking helped prioritize these factors in order of importance or occurrence. This approach provided a clearer picture of the system's
strengths and areas for improvement, allowing for informed decisions on future enhancements.
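A minimal sketch of the frequency-and-ranking step follows, using hypothetical response themes rather than the study's actual survey data.

```python
# Sketch of frequency analysis and ranking of reported advantages:
# count how often each theme appears, then rank by count.
from collections import Counter

mentions = ["faster tabulation", "ease of use", "faster tabulation",
            "reduced paperwork", "ease of use", "faster tabulation"]

for rank, (theme, freq) in enumerate(Counter(mentions).most_common(), 1):
    print(f"{rank}. {theme} ({freq} mentions)")
```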
Results and Discussion
This section presents the findings of the study, showcasing the collected data through a combination of tabular and textual formats to
ensure clarity and comprehensibility. The results are systematically organized in a logical sequence that aligns with the study’s research
objectives and problem statement.
The survey involved a total of ninety (90) participants, comprising thirty (30) IT experts, thirty (30) judges, and thirty (30) tabulators.
These respondents provided valuable insights into the performance of the Browser-Based Tabulation System (BTS), evaluating its
accessibility, accuracy, compatibility, functionality, and reliability.
Developed System Based on Accessibility
Accessibility is often integrated during the requirement assessment and testing phases of IS development, involving potential users to
ensure their needs are met (Teixeira et al., 2024). According to Sumak et al. (2023), integrating accessibility during the requirement
assessment and testing phases of Information Systems (IS) development is crucial for creating inclusive digital solutions. The
involvement of potential users, especially those with disabilities, ensures that their needs are adequately addressed. In order to assess
the accessibility of the developed system, respondents were asked to evaluate its performance using a list of indicators and the corresponding results displayed in Table 2.
Table 2. Performance Rating of the Developed System Based on Accessibility
No.   Items                                                                              Mean   Std. Dev.   Interpretation
1     I can use the system comfortably on different devices (laptop, mobile, tablet).   4.76   0.48        Very Satisfied
2     I do not experience issues or errors when using the system.                       4.62   0.57        Very Satisfied
3     I can connect to one or more computers in a network.                              4.73   0.49        Very Satisfied
4     It can be accessed using any browser.                                             4.71   0.50        Very Satisfied
5     I can print the tabulation result immediately in this system.                     4.69   0.55        Very Satisfied
      Mean                                                                              4.70   0.37        Very Satisfied
Table 2 presents the performance rating of the developed system in terms of accessibility, based on user feedback. Each item was evaluated using a Likert scale, with responses averaged to determine the overall satisfaction level.
The results indicate a high level of user satisfaction, with the accessibility-related aspects receiving an overall mean score of 4.70, interpreted as "Very Satisfied." The system's excellence can be attributed to several key factors. Firstly, users found the system comfortable to use across multiple devices, including laptops, mobile phones, and tablets. Secondly, the system exhibited minimal errors or issues, ensuring a smooth user experience. Thirdly, it effectively supported network connectivity, allowing access to multiple computers in a network.
Furthermore, the system was browser-compatible, enabling seamless access through any web browser. Lastly, users could conveniently print tabulation results immediately, enhancing efficiency. Overall, given the developed system's excellent rating, these findings suggest that the system is highly accessible and user-friendly, meeting the needs of its users efficiently. The findings are corroborated by Bostic et al. (2021), who stress that accessibility compliance achieved with significantly less effort and expenditure increases the quality of applications. This aligns with research by Dix et al. (2004), who stated that reducing system errors enhances efficiency and user satisfaction. Additionally, the system's capability to connect within a network and print tabulation results instantly demonstrates its practical functionality in facilitating real-time data management.
Developed System Based on Accuracy
Accuracy in a tabulation system refers to the system’s ability to correctly process, count, and aggregate data without errors. According
to Masanori (2013), accuracy is a core requirement in any electronic voting system, ensuring that every vote cast is correctly recorded
and counted. In order to assess the accuracy of the developed system, respondents were asked to evaluate its performance using a list
of indicators and the corresponding results displayed in Table 3.
Table 3 presents the performance rating of the developed system in terms of accuracy, based on user feedback. Each aspect of accuracy
was assessed, and the results indicate a consistently high level of user satisfaction, with a mean score of 4.68, interpreted as "Very
Satisfied."
The system's excellence can be attributed to several key factors. Firstly, the system accurately generates a list of judges' information and passwords, ensuring secure and reliable access. Secondly, it accurately records and processes scores from the judges, minimizing
errors in tabulation. Thirdly, users found the system effective in computing exact results and averages per event, reflecting precise data
handling. Furthermore, the system ensures the overall accuracy of event results, reducing discrepancies. Moreover, by maintaining fair
and precise scoring, the system enhances the integrity and fairness of the competition. Ensuring accurate score recording is critical to
the reliability of a tabulation system. Alvarez and Hall (2008) highlight that even minor errors in score entry or calculation can lead to
significant outcome discrepancies, emphasizing the need for robust data validation methods. Automated data entry and real-time error
detection mechanisms, such as those used in Scantegrity II (Chaum et al., 2008), have been shown to improve accuracy and reduce
human error in data collection.
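As an illustration of such validation (not the BTS's actual implementation), the sketch below rejects incomplete or out-of-range score entries before tabulation, mirroring the real-time error detection discussed above; the function name and score range are assumptions.

```python
# Illustrative validation sketch: flag incomplete or out-of-range
# score entries before they reach tabulation.
def validate_entry(scores: list[float], expected_judges: int,
                   low: float = 0.0, high: float = 100.0) -> list[str]:
    errors = []
    if len(scores) != expected_judges:
        errors.append(f"expected {expected_judges} scores, got {len(scores)}")
    errors += [f"score {s} outside {low}-{high}" for s in scores
               if not (low <= s <= high)]
    return errors

print(validate_entry([92.0, 105.0], expected_judges=3))
# ['expected 3 scores, got 2', 'score 105.0 outside 0.0-100.0']
```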
Developed System Based on Compatibility
Software compatibility refers to the ability of a program to interact with different systems, applications, or environments without
functional issues. Ghezzi et al. (2002) define compatibility as the degree to which software can function correctly in varied
configurations, including hardware specifications, operating systems, and dependencies. In order to assess the compatibility of the
developed system, respondents were asked to evaluate its performance using a list of indicators and the corresponding results displayed in Table 4.
Table 4. Performance Rating of the Developed System Based on Compatibility
No. Items Mean Std. Dev. Interpretation
1 I can access it from various devices. 4.77 0.46 Very Satisfied
2 The system functions well on different operating systems. 4.68 0.54 Very Satisfied
3 The system's performance is consistent across different platforms. 4.79 0.41 Very Satisfied
4 The system adapts well to different screen sizes and resolutions. 4.58 0.58 Very Satisfied
5 The system is compatible with various web browsers. 4.68 0.52 Very Satisfied
Mean 4.70 0.35 Very Satisfied
Table 4 presents the performance rating of the developed system in terms of compatibility, with all evaluated aspects receiving a
mean score of 4.70, interpreted as "Very Satisfied." These findings highlight the system’s strong adaptability across various devices,
operating systems, and platforms. The system's excellence can be attributed to several key factors. Firstly, the users confirmed that the
system is accessible from various devices, ensuring flexibility in usage. Secondly, the system functions smoothly across different
operating systems, enhancing its usability for a diverse audience. Thirdly, it maintains consistent performance across multiple
platforms, ensuring reliability regardless of the environment. Furthermore, the system adapts well to different screen sizes and
resolutions, optimizing the user experience across desktops, tablets, and mobile devices. Moreover, compatibility with various web
browsers ensures that users can access the system without technical limitations. With an overall mean of 4.70, these results indicate
that the system is highly versatile, user-friendly, and compatible, meeting the diverse needs of its users. These findings align with
literature suggesting that compatibility-focused software design enhances usability, minimizes technical barriers, and increases user
adoption (Pressman et al., 2020). Additionally, the system's ability to function consistently across multiple platforms, devices, and browsers aligns with established human-computer interaction (HCI) principles (Preece et al., 2015), ensuring greater accessibility and usability. These results indicate that the BTS can effectively support diverse users by offering seamless access and stable performance in various digital environments, making it a reliable tool for school-related tabulation activities.
Developed System Based on Functionality
According to ISO/IEC 25010 (2011), functionality is one of the core quality characteristics of software, encompassing suitability,
accuracy, interoperability, compliance, and security. Pressman & Maxim (2020) further emphasize that software functionality should
align with user needs and industry standards to enhance usability and efficiency. In order to assess the functionality of the developed
system, respondents were asked to evaluate its performance using a list of indicators and the corresponding results displayed in Table 5.
Table 5 presents the performance evaluation of the developed system in terms of reliability, with all assessed aspects receiving a
mean score of 4.62, interpreted as "Very Satisfied." This reflects a high level of user confidence in the system’s ability to perform
accurate calculations, operate without errors, maintain data integrity, and ensure security.
The system's excellence can be attributed to several key factors. Firstly, the system performs reliable calculations, ensuring the accuracy
of tabulated data. Secondly, it operates without errors, preventing disruptions and improving user trust. Thirdly, the system maintains
data integrity, ensuring that information is processed and stored without corruption. Furthermore, robust security features protect
sensitive data, preventing unauthorized access. Moreover, the system generates precise outputs and reports, reinforcing its reliability
for event scoring and data management. With an overall mean of 4.62, the results indicate that the system is highly reliable, providing
users with accurate, secure, and consistent performance.
The results are consistent with the work of Morad et al. (2023), where the reliability of the system was assessed through the primary
Time to Failure data analysis techniques. The results indicated that the principal system reliability assessment groups generated
comparable curves; nonetheless, the semiparametric approach showed superior performance compared to the other methods. This
outcome highlights that this specific system reliability evaluation group is the most efficient method for intricate systems.
Overall Performance of the Developed System
The developed system was rated by the respondents according to its accessibility, accuracy, compatibility, functionality and reliability
to determine if this tabulation system shows improvement and eventually eliminates errors regarding the conventional way. The results
below show the overall performance of the developed system.
Table 6 illustrates the level of satisfaction of the evaluators towards the browser-based tabulation system in terms of accessibility, accuracy, compatibility, functionality, and reliability. The results show that the weighted means for the system's components were as follows: accessibility = 4.70, accuracy = 4.68, compatibility = 4.70, functionality = 4.61, and reliability = 4.62. This implies that, on average, evaluators are highly satisfied with the browser-based tabulation system across the five components.
Moreover, the developed tabulation system demonstrates substantial improvements over conventional methods by offering higher
accessibility, accuracy, compatibility, functionality, and reliability. These results align with established software quality frameworks
(ISO/IEC 25010, 2011) and literature emphasizing the advantages of automation, usability, and system stability in modern software
development.
Level of Satisfaction of the Evaluators towards the System when grouped according to Types of Evaluators
In order to assess the level of satisfaction of the evaluators of the developed system, respondents were asked to evaluate its performance using a list of indicators and the corresponding results displayed in Table 8.
Table 8. Level of Satisfaction of the Evaluators towards the System when Grouped according to Types of Evaluators
                Judges (n = 30)   Tabulators (n = 30)   IT Experts (n = 30)
Variables       Mean     SD       Mean     SD           Mean     SD            Overall Mean   Remarks
Accessibility   4.82     0.25     4.42     0.40         4.87     0.25          4.70           Very Satisfied
Accuracy        4.69     0.41     4.49     0.42         4.84     0.30          4.67           Very Satisfied
Compatibility   4.83     0.24     4.45     0.37         4.81     0.31          4.70           Very Satisfied
Functionality   4.63     0.35     4.56     0.42         4.64     0.35          4.61           Very Satisfied
Reliability     4.69     0.29     4.51     0.33         4.65     0.40          4.62           Very Satisfied
Overall         4.73     0.24     4.49     0.28         4.76     0.25          4.66           Very Satisfied
Table 8 illustrates the level of satisfaction of the evaluators towards the browser-based tabulation system when grouped according to types of evaluators. The results show that the weighted means by evaluator group were judges = 4.73, tabulators = 4.49, and IT experts = 4.76. This implies that, on average, all three types of evaluators are highly satisfied with the browser-based tabulation system across the five components.
Result of One-Way ANOVA Test for the Significant Difference in the Level of Satisfaction of the Evaluators
The One-Way ANOVA test was conducted to determine if there were significant differences in satisfaction levels among IT experts,
judges, and tabulators regarding the developed system.
The F-value and p-value in Table 9 indicate whether the variations in satisfaction scores were statistically significant. If the p-value was less than 0.05, at least one group had a significantly different perception of the system; if the p-value was greater than 0.05, all groups had similar satisfaction levels. The corresponding results are displayed in Table 9.
Table 9. Overall Result of the One-Way ANOVA Test of Difference
Overall          Sum of Squares   df   Mean Square   F       p-Value   Interpretation
Between Groups   1.38             2    0.69          10.36   .000      There is significant difference.
Within Groups    5.80             87   0.07
Total            7.17             89
Table 9 presents the result of the One-Way ANOVA test, which was conducted to determine whether there is a significant difference between the means of three
or more independent groups. The analysis resulted in a between-group sum of squares of 1.38, with 2 degrees of freedom (df), leading
to a mean square value of 0.69. The within-group sum of squares was 5.80, with 87 degrees of freedom, producing a mean square value
of 0.07. The F-value, which measures the ratio of variance between groups to variance within groups, was calculated as 10.36.
The corresponding significance value (Sig.) was .000, indicating that the p-value is less than the conventional threshold of 0.05. This
result suggests that there is a statistically significant difference between the groups, meaning that at least one group's mean differs from
the others. Consequently, the null hypothesis, which assumes no significant difference between group means, is rejected. These findings
suggest that at least one group significantly differs from the others, supporting the hypothesis that the means are not equal across the
compared groups.
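As a check on these figures, the F-value and p-value can be reproduced from the sums of squares and degrees of freedom reported in Table 9; the sketch below does so with SciPy, with small rounding differences from the published values expected.

```python
# Sketch reproducing the ANOVA statistics from Table 9's quantities
# (SS_between = 1.38 on 2 df; SS_within = 5.80 on 87 df).
from scipy.stats import f

ms_between = 1.38 / 2    # mean square between groups ≈ 0.69
ms_within = 5.80 / 87    # mean square within groups ≈ 0.07
F = ms_between / ms_within
p = f.sf(F, 2, 87)       # right-tail probability of the F distribution

print(f"F = {F:.2f}, p = {p:.4f}")  # ≈ F = 10.35, p = 0.0001
```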
This result aligns with previous studies that emphasize the utility of one-way ANOVA in determining significant variations among
independent samples. For instance, Jones and Carter (2022) demonstrated that ANOVA results provide valuable insights into
population mean comparisons, particularly when dealing with multiple categories. Additionally, the work of Brown et al. (2021)
highlighted the importance of post hoc tests following significant ANOVA results to pinpoint specific group differences. In line with
these studies, the current findings indicate a need for further analysis, such as post hoc comparisons, to determine which groups
contribute to the observed significant difference.
Result of Tukey's Test for Post Hoc Analysis for the Significant Difference
The developed system was analyzed further using Tukey's test for post hoc analysis to determine the significant difference in the level of satisfaction of all evaluators towards the system when grouped according to types of evaluators. Results of Tukey's test are typically reported with p-values, where a p-value less than 0.05 indicates significant differences between groups (Sabuda, 2021).
Table 10. Tukey's Test for Post Hoc Analysis for the Significant Difference
Pair of Evaluators p-Value Remarks
Judges and Tabulators 0.000 There is significant difference
Judges and IT Experts 0.829 There is no significant difference
Tabulators and IT Experts 0.000 There is significant difference
Table 10 presents the results of Tukey's Honest Significant Difference (HSD) test, a widely used post hoc analysis method that helps identify specific group differences following a significant One-Way ANOVA result. According to Tukey (1949), this test controls the family-wise error rate and is appropriate for pairwise comparisons when equal sample sizes and variances are assumed.
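A sketch of such a Tukey HSD comparison using statsmodels follows, run on synthetic scores whose group means and spreads merely echo Table 8; the study's raw data are not reproduced here.

```python
# Sketch of a Tukey HSD post hoc comparison on synthetic satisfaction
# scores (30 per group), not the study's actual dataset.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
scores = np.concatenate([
    rng.normal(4.73, 0.24, 30),  # judges (mean/SD echo Table 8)
    rng.normal(4.49, 0.28, 30),  # tabulators
    rng.normal(4.76, 0.25, 30),  # IT experts
])
groups = ["judges"] * 30 + ["tabulators"] * 30 + ["it_experts"] * 30

# Prints a summary table of pairwise mean differences and p-values
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```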
The Tabulators, who serve as primary users of the Automated Tabulation System, typically have hands-on experience in managing and
organizing data during school activities or competitions. Their role involves inputting scores, ensuring data accuracy, checking for
consistency, and finalizing results under time constraints. Due to the nature of their tasks, tabulators often rely heavily on the system’s
usability, speed, and intuitive interface. They are more likely to notice practical issues, inefficiencies, or features that enhance or hinder
their workflow. Their assessment of the system tends to be based on day-to-day usability, responsiveness, and how the system supports
their performance in real-time scenarios.
On the other hand, the IT Experts who developed or are closely familiar with the system possess a deep understanding of its technical
architecture, coding, and underlying functionalities. They evaluate the system from a developer’s perspective, focusing on aspects such
as system design, functionality, performance optimization, and error handling. Their familiarity with how the system operates "behind
the scenes" may lead them to overlook or interpret differently the issues faced by end-users like tabulators. As developers, their
evaluations are typically grounded in technical stability, system logic, and efficiency of execution rather than user experience alone.
This contrast in perspectives explains the significant difference between the Tabulators and IT Experts, as revealed by the Tukey HSD
test (p = 0.000). Tabulators evaluate the system based on usability and workflow efficiency, while IT Experts assess it from a technical
and developmental standpoint. These differing lenses naturally lead to varied assessments of the same system.
Furthermore, the lack of significant difference between Judges and IT Experts (p = 0.829) could imply that Judges—being occasional
users or observers—may align more with the high-level perception of the system's functionality, similar to how developers perceive it,
without diving into the operational nuances experienced by regular users like tabulators.
These findings support the notion that while some groups exhibit distinct perspectives, others share comparable views, reinforcing the
necessity of post hoc analysis in understanding group-specific variations after an ANOVA test (Field, 2013).
Overall Result in Significant Differences of the Developed System
In order to evaluate the overall results of the significant differences of the developed system, the results are displayed in Table 11.
Table 11. Overall Result in Significant Differences of the Developed System
Indicators N Mean Std. Dev. Interpretation
Accessibility 30 4.70 0.222 Very Satisfied
Accuracy 30 4.68 0.292 Very Satisfied
Compatibility 30 4.70 0.176 Very Satisfied
Functionality 30 4.61 0.241 Very Satisfied
Reliability 30 4.62 0.227 Very Satisfied
Total 150 4.66 0.235 Very Satisfied
Table 11 shows the overall results of the evaluation, which indicate that respondents were very satisfied with the developed system across all key indicators: accessibility, accuracy, compatibility, functionality, and reliability. The mean scores for each category ranged from 4.61 to 4.70, demonstrating a consistently high level of user satisfaction. Accessibility and compatibility received the highest mean score of 4.70, with standard deviations of 0.222 and 0.176, respectively. This suggests that users found the system easy to navigate and well-integrated with existing processes.
The accuracy of the system was also highly rated (M = 4.68, SD = 0.292), indicating that users perceived the system as providing
precise and reliable results. Functionality (M = 4.61, SD = 0.241) and reliability (M = 4.62, SD = 0.227) also received very satisfied
ratings, reinforcing the system’s ability to perform as expected while maintaining dependability in various usage scenarios.
The overall mean satisfaction score of 4.66 (SD = 0.235) confirms that respondents found the system to be effective, user-friendly, and
reliable in facilitating tabulation tasks. The relatively low standard deviation values across all indicators suggest a strong agreement
among users, reinforcing the system’s consistency in meeting their expectations. These findings align with previous studies (DeLone
& McLean, 2003; Venkatesh et al., 2003) on user satisfaction in information systems, which emphasize that systems with high usability,
accuracy, and reliability tend to receive positive evaluations.
In summary, the developed system was highly rated across all dimensions, with users expressing a strong level of satisfaction regarding
its performance, ease of use, and accuracy. The consistently positive feedback suggests that the system is well-suited for its intended
purpose, with minimal concerns from users regarding its functionality or reliability.
Advantages and Disadvantages of a Developed System
The developed system was rated by the respondents according to its advantages and disadvantages compared to manual tabulation.
Figure 1. Word Cloud of the Advantages and Disadvantages of the Developed System
Figure 1 shows that the developed system enhances the efficiency and accuracy of tabulation processes by offering increased speed,
minimizing errors, and ensuring fairness in scoring. Its user-friendly interface makes it accessible, while automation reduces manual
work and improves efficiency. The system supports multi-device and multi-browser accessibility, processes large data sets quickly,
ensures data security, and generates immediate reports for transparency.
However, the system has some drawbacks, including ranking-based scoring limitations, potential downtime, and technical knowledge
requirements. Users may face challenges with technical failures, maintenance costs, and resistance to change. Regular updates, training,
and software upgrades are necessary to maintain performance and security. Browser dependency and troubleshooting needs can also
pose issues.
Overall, while the system significantly improves accuracy, efficiency, and usability, addressing its limitations through continuous
updates and training will enhance its effectiveness. This consolidates the insights garnered from survey respondents, encompassing their viewpoints and evaluations of the advantages and disadvantages of the developed system. Essentially, it offers a comprehensive overview of the sentiments, perspectives, and thoughts expressed by those who took part in the survey regarding the overall performance of the system.
Conclusions
Based on the capstone project, the study’s findings lead to the following conclusions:
The Browser-Based Tabulation System (BTS) performs very well. The system effectively met its goals, demonstrating strong performance in accessibility and compatibility with respect to the usability of the system. Among the groups, IT experts gave the highest ratings, followed by judges and tabulators. Statistical analysis revealed varying levels of satisfaction, particularly among tabulators compared to other evaluators.
The study also highlighted key advantages of the system, including faster tabulation, ease of use, real-time accessibility, reduced
paperwork, and enhanced security. However, some limitations were identified, such as browser dependency, occasional technical
failures, and the need for user training. While the system provides a more efficient alternative to manual tabulation, continuous
improvements and support mechanisms are necessary for optimal performance.
Based on the conclusions drawn from the study, the researcher of this capstone project recommends the following: To maintain and enhance performance, continuous system improvements should be implemented, particularly in functionality and reliability, which had slightly lower ratings. Providing user training and support can help maximize system efficiency, while regular performance optimization will ensure accessibility and compatibility across platforms. Enhancing accuracy through automated error detection and validation mechanisms can further improve data integrity. Additionally, establishing a structured user feedback system will help identify areas for future enhancements. These recommendations aim to sustain the system's high satisfaction levels and ensure long-term efficiency and reliability.
To further improve the system, it is recommended to address any usability concerns raised by tabulators through additional training, interface enhancements, or system optimizations. Ensuring continuous improvements based on user feedback will help maintain high satisfaction levels across all evaluator groups. The results indicate significant differences in system evaluation between judges and tabulators, as well as between tabulators and IT experts, while no significant difference was found between judges and IT experts. This suggests that tabulators perceive the system differently compared to the other groups. To address this, further investigation should be conducted to understand the specific concerns of tabulators and implement necessary improvements. Enhancing system features, providing additional training, or refining the user interface based on their feedback can help bridge the gap in perception and ensure a more consistent user experience across all evaluator groups.
To maximize the system's benefits, it is recommended to provide user training, establish a reliable support system for technical issues, and implement regular maintenance and updates. Addressing these challenges will ensure a smoother user experience and wider adoption of the system.
References
Byjus. (2020). Meaning and objective of tabulation. Retrieved from https://byjus.com/commerce/meaning-and-objective-of-tabulation/
Afable, S. M., & Quiloña, J. G. (2020). Multi-user Automated Pageant Tabulation System. International Journal of Engineering and Advanced Technology (IJEAT), 733-736.
Afandi, M., Hartono, M., & Hanani, E. S. (2021). Development of Scoring System for Archery Competition Results Website Based.
Journal of Physical Education and Sports, 10(3).
Agaylo, J. J., Kaindoy, A. J., Caliao, F. M., & Lintao, D. S. (2024, May 28). Journal for Educators, Teachers and Trainers. Retrieved from https://jett.labosfor.com/index.php/jett/article/view/2271/1312
Agoylo, J. C., Kaindoy, A. B., Caliao, F. V., & Lintao, D. S. (2024). Development of Dynamic Event Tabulation System. Journal for Educators, Teachers and Trainers, 252-268.
Aljarrah, E., Elrehail, H., & Aababneh, B. (2016b). E-voting in Jordan: Assessing readiness and developing a system. Computers in
Human Behavior, 63, 860–867. https://doi.org/10.1016/J.CHB.2016.05.076
Anthony, J. S. (2024). Acceptability of AutoBeaut: An Automated Judging System for Beauty Pageants Throughout the Five Years
Operation. Journal of Innovative Technology Convergence, 75-84.
Araújo, A. C. M. de, Lopes, J. R. S., & Oliveira, B. V. C. (2022). Introduction to general systems theory. Brazilian Journal of Development, 8(4), 31540–31546. https://doi.org/10.34117/bjdv8n4-573
Armstrong, E., Hewson, J. C., & Sutherland, J. C. (2022). Characterizing Tradeoffs in Memory, Accuracy, and Speed for Chemistry
Tabulation Techniques. Combustion Science and Technology, 2614-2633.
Arruejo, A., & Romano, M. (2024, April 30). UNP optimizes tabulation for Ms. San Vicente 2024. Retrieved from University of
Northern Philippines: http://unp.edu.ph/ccit-at-ms-sanvicente/
Balilo Jr., B., Rodriguez, R., & Guerrero, J. J. (2023). Insights and Trends of Capstone Project of CS/IT Department of Bicol University
from 2011 to 2019. International Journal of Computing Sciences Research, 7, 1255–1272. https://doi.org/10.25147/ijcsr.2017.001.1.99
Bañoza, D. A., Driz, L. S., & Cuneta, C. R. (2023). An Integrated Web-Based E-Voting System with E-mail Notification and Real-Time Visualization. Retrieved from https://manila.lpu.edu.ph/publications/information-technology-e-journal/an-integrated-web-based-e-voting-system-with-e-mail-notification-and-real-time-visualization/
Barrios, E. T., Eman, R. C., & Paasa, P. A. (2011). A Consolidated Web Browser Interface Using Multiple Browser Libraries for Testing Web Designs. UIC Research Journal, 17(2), 163-274.
de Camargo Fiorini, P., Roman Pais Seles, B. M., Chiappetta Jabbour, C. J., Barberio Mariano, E., & de Sousa Jabbour, A. B. L. (2018).
Management theory and big data literature: From a review to a research agenda. International Journal of Information Management, 43,
112–129. https://doi.org/10.1016/J.IJINFOMGT.2018.07.005
Doña, M., Ontua, V. B., Mangadlao, J. Q., James, C., Palomar, M., & Piala, J. T. (2022). Online Tabulation System for Repository of Events and Competition for Saint Francis Xavier College (Capstone Project).
U.S. Election Assistance Commission. (2022). Election Technology Best Practices. Retrieved from www.eac.gov
Haidar, A. (n.d.). Computers In Our Daily Life. In International Journal of Computer Science and Information Technology Research
(Vol. 9). www.researchpublish.com
Hamilton, B. (2016, December 11). StudyMoose. Retrieved from studymoose.com/tabulation-system-essay :
https://studymoose.com/tabulation-system-essay
Journal, S., & Journal, U. (2020a). International Journal of Engineering & Advanced Technology (IJEAT).
https://doi.org/10.35940/ijeat.B3833.029320
Kaaba, O., & Mwanangombe, P. (2021). Parallel Vote Tabulation in Zambia. Zambia Electoral Analysis Project (ZEAP) Briefing Paper
Series.
Khotimah, K., Bahtiar, M. D., Ningsih, Y. F., & Arief, N. A. (2024). Advancing Efficiency, Transparency, and Accuracy of Digital
Quality Assurance Systems in Higher Education. https://doi.org/10.31219/osf.io/frbaw
Lei, Y., Bossut, C., Sui, Y., & Zhang, Q. (2024). Context-Free Language Reachability via Skewed Tabulation. Proc. ACM Program.
Lang, 26.
Mallo-Eustaquio, G. G. (2019). Design and Implementation of EMRC Web-based Portal. PAARL Research Journal, 1-34.
Mercuri, R. (2001). Electronic Vote Tabulation. University of Pennsylvania.
Michael, M., & Orozco, M. (n.d.). Web-Based Thesis/ Capstone Project Defense Evaluation System of the CCS Biñan.
Miñan, G. S., Moreno, J. A., & Fernandez, X. D. (2024). LIA Method for the Application of Microsoft Excel in Data Tabulation in
Systematic Reviews. Research Gate.
Naimi, A. I., & Westreich, D. J. (2014). Big Data: A Revolution That Will Transform How We Live, Work, and Think. American
Journal of Epidemiology, 179(9), 1143–1144. https://doi.org/10.1093/aje/kwu085
Nguyen, Q., Butler, H., & Matthews, G. J. (2022). An Examination of Olympic Sport Climbing Competition Format and Scoring
System. Journal of Data Science, 20(2). https://doi.org/10.6339/22-JDS1042
Nobari, A., & Rafiei, D. (2024). TabulaX: Leveraging Large Language Models for Multi-Class Table Transformations.
https://doi.org/10.48550/arxiv.2411.17110
Nu’man, A. (2012). A Framework for Adopting E-Voting in Jordan. In Electronic Journal of e-Government (Vol. 10). www.ejeg.com
Orioque, J. A., Cabardo, J., & Selpa, H. D. (2021). Web-based Scoring System. Proceedings - 2021 International Conference on
Innovative Technology Convergence, CITC 2021. https://doi.org/10.1109/CITC54365.2021.00014
Oyelude, O., & Olojede, I. (2023). Evaluating the Effectiveness of Electronic Voting Systems in Nigeria: Challenges and Opportunities.
African Journal of Politics and Administrative Studies, 16(2), 84–104. https://doi.org/10.4314/ajpas.v16i2.5
Pitogo, N. S. (2024). Enhancing Administrative Efficiency Through the DepEd Caraga Regional Office Information Systems Portal.
International Journal of Computing Sciences Research, 2822-2840.
Pivtorak, D. (2024). Development of the prototype of the automated system for creating accompanying documents of the educational process. Bulletin of the National Technical University "KhPI," 2(20), 30–37. https://doi.org/10.20998/2413-4295.2024.02.05
Rand, L., Boyce, T., & Viski, A. (n.d.). Emerging Technologies and Trade Controls: A Sectoral Composition Approach (Computer Vision report).
MIT Election Lab. (2023, April 21). Voting technology. Retrieved from https://electionlab.mit.edu/research/voting-technology
Sabuda, M. (2021). Supplementary Table 3 - TukeyHSD Statistical Comparisons for Results at Experiment End.
https://doi.org/10.6084/m9.figshare.14714373
Sotio Bryan, R. A., & Hinong, J. D. (2020). Eserve: Centralized Integrated Student Services With Interactive Dashboard.
Toledo, F. (2024, September). Abstracta. Retrieved from https://abstracta.us/blog/performance-testing/performance-testing-metrics
Vimala, V. (2023). Research Methodology and Statistics for Home Science.
Xiang, C. (2021). Comparative the Scoring System of Bodybuilding Competition Based on ICT Technology. ACM International Conference Proceeding Series. https://doi.org/10.1145/3495018.3495343
Zakiati, E., & Rizky, M. (2022). Analysis of School-Based Financing Management. Economic Education Analysis Journal, 11(2), 217–
232. https://doi.org/10.15294/eeaj.v11i2.58100