Wall Street Journal/Times Higher Education College Rankings 2020 methodology

Ranking of US universities and colleges puts student success and learning at its heart

August 29, 2019

View the full results of the Wall Street Journal/Times Higher Education College Rankings 2020


The Wall Street Journal/Times Higher Education College Ranking is a pioneering ranking of US colleges and universities that puts student success and learning – based on more than 170,000 current student voices – at its heart.

The ranking includes clear performance indicators designed to answer the questions that matter most to students and their families when making one of the most important decisions of their lives – whom to trust with their education. Does the college have sufficient resources to teach me properly? Will I be engaged, and challenged, by my teacher and classmates? Does the college have a good academic reputation? What type of campus community is there? How likely am I to graduate, pay off my loans and get a good job?

The ranking includes the results of the THE US Student Survey, which examines a range of key issues including students’ engagement with their studies, their interaction with their teachers and their satisfaction with their experience.

The ranking adopts a balanced scorecard approach, with 15 individual performance indicators combining to create an overall score that reflects the broad strength of the institution.


For all questions about this ranking, please email:
usrankings@timeshighereducation.com


Data sources

Data come from a variety of sources: the Integrated Postsecondary Education Data System (IPEDS), the US Department of Education’s Federal Student Aid (FSA) data, the College Scorecard, the Bureau of Economic Analysis (BEA), the THE US Student Survey, the THE Academic Survey and the Elsevier bibliometric dataset.


Our data are, in most cases, normalised so that the value we assign in each metric can be compared sensibly with other metrics.
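As a sketch of what such normalisation can look like: the article does not specify the exact function used, so standard z-scoring is assumed here purely for illustration.

```python
from statistics import mean, pstdev

def normalise(values):
    """Standardise one metric so its values can be compared sensibly
    with other metrics. Z-scoring (subtract the mean, divide by the
    standard deviation) is assumed; the ranking's actual normalisation
    function is not published."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Four institutions' raw values on one metric, on a common scale.
scores = normalise([10.0, 20.0, 30.0, 40.0])
```

After normalisation the values are centred on zero, so a score's sign immediately shows whether an institution is above or below the average on that metric.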

Methodology

The overall methodology explores four key areas:

Resources

Does the college have the capacity to effectively deliver teaching? The Resources area represents 30 per cent of the overall ranking. Within this we look at:

  • Finance per student (11%)
  • Faculty per student (11%)
  • Research papers per faculty (8%)

Engagement

Does the college effectively engage with its students? Most of the data in this area are gathered through the THE US Student Survey. The Engagement area represents 20 per cent of the overall ranking. Within this we look at:

  • Student engagement (7%)
  • Student recommendation (6%)
  • Interaction with teachers and students (4%)
  • Number of accredited programmes (3%)

Outcomes

Does the college generate good and appropriate outputs? Does it add value to the students who attend? The Outcomes area represents 40 per cent of the overall ranking. Within this we look at:

  • Graduation rate (11%)
  • Value added to graduate salary (12%)
  • Debt after graduation (7%)
  • Academic reputation (10%)

Environment

Is the college providing a good learning environment for all students? Does it make efforts to attract a diverse student body and faculty? The Environment area represents 10 per cent of the overall ranking. Within this we look at:

  • Proportion of international students (2%)
  • Student diversity (3%) 
  • Student inclusion (2%)
  • Staff diversity (3%)
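Taken together, the 15 weighted indicators above combine into a single overall score. A minimal sketch of that combination, using the published weights: the metric key names, and the assumption that each metric has already been normalised to a 0–100 scale, are illustrative.

```python
# Weights as published in the methodology (percentages of the overall score).
WEIGHTS = {
    "finance_per_student": 11, "faculty_per_student": 11, "papers_per_faculty": 8,
    "student_engagement": 7, "student_recommendation": 6,
    "interaction": 4, "accredited_programmes": 3,
    "graduation_rate": 11, "value_added_salary": 12,
    "debt_after_graduation": 7, "academic_reputation": 10,
    "international_students": 2, "student_diversity": 3,
    "student_inclusion": 2, "staff_diversity": 3,
}

def overall_score(metric_scores):
    """Combine per-metric scores (assumed normalised to 0-100) into
    the overall score as a weighted average."""
    return sum(metric_scores[m] * w for m, w in WEIGHTS.items()) / 100

# The 15 weights cover the four areas: 30 + 20 + 40 + 10 = 100.
assert sum(WEIGHTS.values()) == 100
```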

Key changes since last year

Debt after graduation metric:

We have replaced the “value added to loan default” metric with the “debt after graduation” metric this year. The debt metric is based on the “debt after graduation” variable published by College Scorecard. 

Graduation rate metric:

The data we use to compile the graduation rate metric have changed from IPEDS’ graduation data to its new Outcome Measures release. The new data provide outcomes variables for first-time and transfer students, and for part-time and full-time students. The previous dataset only covered first-time, full-time students.

Rankings scope:

The smaller number of ranked institutions this year is due to ongoing challenges in distributing the student survey and in getting respondents to validate their responses. This year, we only ranked institutions for which we collected at least 50 validated responses. We hope to increase the number of ranked institutions next year.


The bandings have changed since last year as a result. We now band institutions that are ranked below the top 400 (not the top 500 as previously). This means that the table only displays pillar ranks for universities that are ranked in the top 400 overall or ranked in the top 400 for that particular pillar. 


Metrics used

Resources (30%)

Students and their families need to know that their college has the right resources to provide the facilities, tuition and support that are needed to succeed at college.

By looking at the amount of money that each institution spends on teaching per student (11%), we can get a clear sense of whether it is well funded, with the money to provide a positive learning environment. This metric takes into account spending on both undergraduate and graduate programmes, which is consistent with the way that the relevant spend data are available in IPEDS. Schools are required by the Department of Education to report key statistics such as this to IPEDS, making it a comprehensive source for education data. The data on academic spending per institution are adjusted for regional price differences, using regional price parities data from the US Department of Commerce’s Bureau of Economic Analysis.
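The regional price adjustment can be sketched as a simple deflation by the BEA’s regional price parity index, where the national average is 100. The exact adjustment formula used in the ranking is an assumption here.

```python
def adjust_for_region(spend_per_student, regional_price_parity):
    """Deflate nominal per-student spending by the BEA regional price
    parity (national average = 100), so institutions in expensive and
    cheap regions are compared in like-for-like terms."""
    return spend_per_student * 100 / regional_price_parity

# A school spending $50,000 per student in a region 25% pricier than
# the national average buys the same as $40,000 at national prices.
adjusted = adjust_for_region(50_000, 125.0)
```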

By looking at the ratio of students to faculty members (11%), we get an overall sense of whether the college has enough faculty to teach its students. It gives a broad indication of how likely it is that a student will receive the individual attention that can be necessary to succeed at college, and also a sense of potential class sizes. The source of this statistic is IPEDS. We are using the average of two years of data for this metric in order to provide a better long-term view.

Faculty who are experts in their academic fields and pushing the boundaries of knowledge at the forefront of their discipline can significantly enhance a student’s educational experience when they are able to distil their knowledge and demonstrate the power of real-world problem-solving and enquiry. So our teaching resources pillar also offers a sense as to whether faculty are experts in their academic disciplines by looking at research excellence. We look at the number of published scholarly research papers per faculty (8%) at each institution, giving a sense of their research productivity, and testing to see whether faculty are able to produce research that is suitable for publication in the world’s top academic journals, as indexed by Elsevier.

Engagement (20%)

Decades of research have found that the best way to truly understand teaching quality at an institution – how well it manages to inform, inspire and challenge students – is through capturing what is known as “student engagement”. This was described by Malcolm Gladwell in The New Yorker in 2011 as “the extent to which students immerse themselves in the intellectual and social life of their college – and a major component of engagement is the quality of a student’s contacts with faculty”.

THE has captured student engagement across the US through its US Student Survey, carried out in partnership with two leading market research providers. For 2018 and 2019, we gathered the views of over 170,000 current college and university students on a range of issues relating directly to their experience at college.

Students answer 12 core questions about their experience that are either multiple choice or on a scale from zero to 10, and also provide background information about themselves. The survey was conducted online and respondents were recruited by research firm Streetbees using social media, facilitated, in part, by student representatives at individual schools. We also worked with participating institutions that distributed the survey to random samples of their own students. Respondents were verified as students of their reported college using their email addresses. We used an aggregated group of respondents from both years (2018 and 2019 surveys). At least 50 validated responses in the 2019 survey were required for a university to be included.

To capture engagement with learning (7%), we look at the answers to four key questions:

  • to what extent does the student’s college or university support critical thinking? For example, developing new concepts or evaluating different points of view;
  • to what extent does the teaching support reflection on, or making connections among, the things that the student has learned? For example, combining ideas from different lessons to complete a task;
  • to what extent does the teaching support applying the student’s learning to the real world? For example, taking study excursions to see concepts in action;
  • to what extent do the classes taken in college challenge the student? For example, presenting new ways of thinking to challenge assumptions or values.

To capture a student’s opportunity to interact with others (4%) to support learning, we use the responses to two questions: to what extent does the student have the opportunity to interact with faculty and teachers? For example, talking about personal progress in feedback sessions; and to what extent does the college provide opportunities for collaborative learning? For example, group assignments.

The final measure in this area from the survey is around student recommendation (6%): if a friend or family member were considering going to university, based on your experience, how likely or unlikely are you to recommend your college or university to them?

In this pillar of indicators we also seek to help a student understand the opportunities that are on offer at the institution, and the likelihood of getting a more rounded education, by providing an indicator on the number of different subjects taught (3%). While other components of the Engagement pillar are drawn from the student survey, the source of this metric is IPEDS. We are using the average of two years of data for this metric in order to provide a better long-term view.  

Outcomes (40%)

At a time when US college debt stands at $1.3 trillion, and when the affordability of going to college and value for money are prime concerns, this section looks at perhaps the single most important aspect of any higher education institution – its record on delivering successful outcomes for its students.

We look at the graduation rates for each institution (11%) – a crucial way to help students to understand whether colleges have a strong track record in supporting students enough to get them through their course and ensure that they complete their degrees. This year we are using reported graduation rates for all students including part-time and transfer students (see “Key changes since last year” section above).

This pillar also includes a value-added indicator, measuring the value added by the teaching at a college to salary (12%). Using a value-added approach means that the ranking does not simply reward the colleges that cream off all the very best students, and shepherd them into the jobs that provide the highest salaries in absolute terms. Instead it looks at the success of the college in transforming people’s life chances, in “adding value” to their likelihood of success. The THE data team uses statistical modelling to create an expected graduate salary for each college based on a wide range of factors, such as the make-up of its students and the characteristics of the institution. The ranking looks at how far the college either exceeds expectations in getting students higher average salaries than one would predict based on its students and its characteristics, or falls below what is expected. The value-added analysis uses research on this topic by the Brookings Institution, among others, as a guide.
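The value-added approach can be sketched as a fit-and-residual calculation: fit a model of expected salary, then score each college by actual minus expected. The sketch below uses a single illustrative predictor and ordinary least squares; THE’s actual model uses many student and institutional variables, so everything here beyond “residual = actual minus expected” is an assumption.

```python
def fit_line(x, y):
    """Ordinary least squares with one predictor: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def value_added(colleges):
    """Score each college by its residual: actual median graduate salary
    minus the salary the model predicts from its inputs. A single
    illustrative predictor stands in for the real model's many variables."""
    x = [c["predictor"] for c in colleges]
    y = [c["salary"] for c in colleges]
    slope, intercept = fit_line(x, y)
    return {c["name"]: c["salary"] - (intercept + slope * c["predictor"])
            for c in colleges}

# College B earns more than its inputs predict: positive value added.
residuals = value_added([
    {"name": "A", "predictor": 1.0, "salary": 40_000},
    {"name": "B", "predictor": 2.0, "salary": 55_000},
    {"name": "C", "predictor": 3.0, "salary": 58_000},
])
```

The residuals sum to zero by construction, so this metric ranks colleges against expectation rather than against absolute salary.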

A new metric this year is debt after graduation (7%). The concern over student debt and the cost of higher education in general has come to the forefront of public discussion recently. Introducing a measure of the debt accrued by a college’s students when they graduate reflects this concern and holds institutions accountable for the cost that they represent to individuals and funding sources. We are using the cumulative median debt reported in College Scorecard, which represents the “median loan debt accumulated at the institution by all student borrowers of federal loans”.

This pillar also looks at the overall academic reputation of the college (10%), based on THE’s annual Academic Reputation Survey, a survey of leading scholars that helps us determine which institutions have the best reputation for excellence in teaching. We used the total teaching votes from our 2017 and 2018 reputation surveys.

Environment (10%)

This category looks at the make-up of the student body at each campus, helping students to understand whether they will find themselves in a diverse, supportive and inclusive environment while they are at college. We look at the proportion of international students on campus (2%), a key indicator that the university or college is able to attract talent from across the world and offers a multicultural campus where students from different backgrounds can, theoretically, learn from one another.

We also look more generally at student diversity – both racial and ethnic diversity (3%), and the inclusion of students with lower family earnings (2%). For the former, we use IPEDS data on diversity. For the latter, we look at the proportion of first-generation students (students who are the first in their family to go to college) as reported in the College Scorecard. And we look at the proportion who receive Pell Grants (paid to students in need of financial support), as reported in IPEDS.

We also use a measure of the racial and ethnic diversity of the faculty (3%), drawing on IPEDS data.


Technical overview of metrics

Resources

  • Finance per student – spending on teaching-associated activity per full-time-equivalent student (IPEDS). This is adjusted using regional price parities (BEA)
  • Faculty-to-student ratio – the number of faculty per student as provided by IPEDS
  • Papers per faculty – the number of academic papers published by faculty from a college in the period 2013-2017 (Elsevier) divided by the size of the faculty (IPEDS)

Engagement

The data from the student survey have been rebalanced by gender to reflect the actual gender ratio at the college.

  • Student engagement – the average score of the four questions (critical thinking, connections, applying learning to the real world, challenge) in the THE US Student Survey
  • Interaction – the average score of two questions (interaction with faculty and collaborative learning) in the THE US Student Survey
  • Student recommendation (THE US Student Survey)
  • Subject breadth – the number of courses offered (IPEDS)
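The gender rebalancing noted above can be sketched as post-stratification weighting, where each respondent is weighted by the ratio of the campus share of their gender to the sample share. The survey’s exact weighting scheme is not published, so this is an assumption.

```python
def gender_weights(sample_counts, campus_shares):
    """Per-respondent weights so the survey sample matches the college's
    actual gender ratio: weight = campus share / sample share. A standard
    post-stratification step; the survey's exact scheme is an assumption."""
    total = sum(sample_counts.values())
    return {g: campus_shares[g] / (sample_counts[g] / total)
            for g in sample_counts}

# The sample is 70% female but the campus is 55% female, so female
# responses are down-weighted and male responses up-weighted.
w = gender_weights({"female": 70, "male": 30},
                   {"female": 0.55, "male": 0.45})
```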

Outcomes

  • Graduation rate – the proportion of bachelor’s or equivalent graduates six/eight years after entry (IPEDS; six years for full-time students and eight years for part-time students)
  • Value-added salary – the average calculated residual of the value-added models for salary 10 years after entry. This is calculated using a range of independent variables from the College Scorecard data representing the years 2013, 2014 and 2015. It also draws on data from IPEDS and the BEA
  • Debt after graduation – the median loan debt accumulated by students at the institution, after they graduate. This is the GRAD_DEBT_MDN variable released by College Scorecard.
  • Reputation – the total votes received for teaching excellence from the THE Academic Reputation survey, which is conducted in partnership with Elsevier. We use only votes provided by academics associated with US institutions.

Environment

  • International students – the proportion of students identified as non-resident aliens (IPEDS)
  • Student diversity – a Gini-Simpson calculation of the likelihood of two undergraduates being from different racial/ethnic groups (IPEDS)
  • Faculty diversity – a Gini-Simpson calculation of the likelihood of two faculty members being from different racial/ethnic groups (IPEDS)
  • Student inclusion – the post-normalisation average of the proportion of Pell Grant recipients (IPEDS) and proportion of first-generation students (College Scorecard)
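The Gini-Simpson calculation used for both the student and faculty diversity metrics has a standard closed form: one minus the sum of squared group proportions, i.e. the probability that two randomly chosen people belong to different groups. A minimal sketch (the group labels are illustrative):

```python
def gini_simpson(group_counts):
    """Probability that two randomly chosen people belong to different
    racial/ethnic groups: 1 - sum of squared group proportions."""
    total = sum(group_counts.values())
    return 1 - sum((n / total) ** 2 for n in group_counts.values())

# A campus split evenly across four groups scores higher than one
# dominated by a single group.
even = gini_simpson({"a": 25, "b": 25, "c": 25, "d": 25})   # 0.75
skewed = gini_simpson({"a": 97, "b": 1, "c": 1, "d": 1})
```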

Why isn’t my college included?

There are two reasons why a college might not be included in the ranking.

First, does it meet the eligibility requirements? This is an abbreviated summary:

  • Title IV eligible
  • Awards four-year bachelor’s degrees
  • Located in the 50 states or Washington, DC
  • Has more than 1,000 students
  • Has 20 per cent or fewer online-only students
  • Is not insolvent

We also accept US service academies provided that they are able to supply the necessary data.

The second reason is missing data elements. Where possible we will impute missing values, but where that is not possible we have excluded colleges. In addition, some colleges did not meet our threshold for a valid number of respondents (greater than or equal to 50) to the student survey in 2019. We have also excluded private for-profit colleges.


The calculation of the Wall Street Journal/Times Higher Education College Rankings 2020 has been independently audited by professional services firm PricewaterhouseCoopers (PwC).

Read more about the PwC Wall Street Journal/Times Higher Education College Rankings 2020 report.


