
Automating GPA and Hours for Administrative Purposes at the University of Houston: the 'coogs' package

In institutional effectiveness work, it is often necessary to batch-process the earned hours and GPAs, both in the content area and cumulatively, for undergraduates applying to particular majors in certain programs of study. Because many students apply to these majors at the same time, one can either calculate tens to hundreds of student records by hand or automate the process.

To ease the process through automation, I created an R function called 'bulkgpa' in the 'coogs' package, available to the institutional effectiveness community at the College of Education at the University of Houston.

The function is a hard worker. It takes three raw files directly from PeopleSoft queries and cleans them by removing unneeded columns, dropping duplicated rows, and discarding classes that have drop dates associated with them.
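To give a sense of what that cleaning involves, here is a minimal sketch of the idea using dplyr and readxl. The column names ("Student ID", "Course", "Grade", "Drop Date") are assumptions for illustration, not the actual internals of 'bulkgpa':

```r
# A minimal sketch of the cleaning step, assuming the PeopleSoft export has
# columns named "Student ID", "Course", "Grade", and "Drop Date".
# These names are illustrative; the actual bulkgpa() internals may differ.
library(readxl)
library(dplyr)

clean_query <- function(path) {
  read_excel(path) %>%
    select(`Student ID`, Course, Grade, `Drop Date`) %>%  # keep only needed columns
    distinct() %>%                                        # drop duplicated rows
    filter(is.na(`Drop Date`)) %>%                        # remove classes with drop dates
    select(-`Drop Date`)
}
```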

Argument slots are provided for the raw Excel spreadsheets of transfer classes, transfer hours, and UH courses, along with the content area for which the content GPA must be calculated. Because different classes count toward the content GPA depending on the content area, the fourth argument slot lets the user specify which content area to use. The possible options are listed below; when typing in the content area, match the case and spelling exactly as shown. A sketch of a call to the function follows the list.


core-ec-6

art-ec-12

dance-6-12

math-4-8

math-7-12

elar-4-8

elar-7-12

chemistry-8-12

LOTE-spanish-ec-12

physics-math-7-12

life-science-7-12

physical-science-7-12

bilingual-generalist-ec-6

science

social-science

sped-ed-12

journalism
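
A call to the function might look like the sketch below. The argument names here are placeholders for illustration only; consult the package documentation for the exact signature. The first three arguments point to the raw PeopleSoft Excel exports, and the fourth names the content area from the list above:

```r
library(coogs)

# Hypothetical call; argument names are illustrative, not the documented signature.
results <- bulkgpa(
  transfer_classes = "transfer_classes.xlsx",
  transfer_hours   = "transfer_hours.xlsx",
  uh_courses       = "uh_courses.xlsx",
  content_area     = "math-4-8"
)
```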

 
