

What is Workforce Pell?

In our August blog post, we declared that workforce development was the ultimate focus of federal funding in 2025. This year, we are seeing the implementation of legislation that affirms this focus. In July 2025, Congress passed H.R.1, colloquially known as the One Big Beautiful Bill. Among myriad provisions affecting higher education, school accountability, and student funding, H.R.1 requires the Department of Education to award Workforce Pell Grants to students enrolled in eligible workforce training programs.


Eligible programs provide at least 150 but fewer than 600 clock hours of instruction delivered over at least eight but fewer than 15 weeks. This is a significant expansion of the Pell Grant structure. Previously, Pell Grants were limited to undergraduate students without bachelor’s or professional degrees who were enrolled in programs lasting at least 15 weeks and 600 clock hours. Under Workforce Pell, students who already hold an undergraduate degree may still qualify for funding, provided they meet other requirements.
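

For planning purposes, the duration thresholds reduce to a simple range check. The Python sketch below is a minimal illustration based only on the clock-hour and week ranges described above; the function name is hypothetical, and it ignores every other statutory and regulatory requirement, so treat it as a reading aid rather than an eligibility determination.

def meets_duration_thresholds(clock_hours: int, weeks: int) -> bool:
    # Illustrative only: checks the ranges described above (at least 150
    # but fewer than 600 clock hours, over at least 8 but fewer than 15
    # weeks). Accreditation, outcomes, and state approval requirements
    # are not modeled here.
    return 150 <= clock_hours < 600 and 8 <= weeks < 15

# A 300-clock-hour program delivered over 10 weeks falls in range:
print(meets_duration_thresholds(300, 10))  # True
# A 600-clock-hour, 15-week program is ordinary Pell territory instead:
print(meets_duration_thresholds(600, 15))  # False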

 

What Does This Mean for Institutions?

H.R.1 mandates that Workforce Pell be implemented by July 1, 2026, which creates some uncertainty for students, government employees, and educational institutions as regulatory and operational details continue to emerge. On January 9, the Accountability in Higher Education and Access through Demand-driven Workforce Pell (AHEAD) Committee concluded its negotiated rulemaking process to define how eligible programs will be identified and evaluated.


The committee outlined a set of accountability requirements tied to program completion, employment outcomes, and post-completion earnings. These measures will be combined into a single earnings premium test, under which programs that fail to meet standards in two out of three years will lose Workforce Pell eligibility. Accrediting agencies will also be required to review Workforce Pell programs to ensure they meet quality standards. In addition to federal requirements, institutions will need to comply with state-specific criteria determining which programs align with workforce priorities.
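

The accountability mechanics are still being finalized, but the two-out-of-three-years language above can be illustrated with a short sketch. The Python code below assumes, purely as one possible interpretation, that each program receives a yearly pass/fail result on the combined earnings premium test and that two failures within any three consecutive years end eligibility; the function name and data shape are hypothetical, not the final regulation.

def loses_workforce_pell_eligibility(yearly_pass: list[bool]) -> bool:
    # Hypothetical reading of "fails to meet standards in two out of
    # three years": scan every three-consecutive-year window and flag
    # the program if any window contains two or more failures.
    for start in range(len(yearly_pass) - 2):
        window = yearly_pass[start:start + 3]
        if window.count(False) >= 2:
            return True
    return False

# A program that fails in years 2 and 4 has two failures within years 2-4:
print(loses_workforce_pell_eligibility([True, False, True, False]))  # True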

 

How Should Institutions Prepare?

1. Evaluate existing programs - Programs lasting at least eight weeks and 150 clock hours may already meet Workforce Pell eligibility thresholds. Institutions should identify programs that qualify, those that may require modification, and gaps in current offerings.


2. Assess program outcomes and data capacity - Eligibility depends on completion, employment, and earnings outcomes. Institutions should assess whether they can reliably track these metrics, particularly for non-credit programs, and identify gaps in data systems or reporting processes.


3. Align programs with state workforce priorities - States will determine which programs qualify as high-skill, high-wage, or in-demand. Reviewing labor market data and employment needs will be vital for successful programming.


4. Plan for program development and funding opportunities - Institutions without eligible programs may consider developing new workforce pathways. Federal, state, and philanthropic grants are likely to support this work, and many will require clearly defined outcomes and evaluation plans.


As your institution evaluates or expands existing programs, or applies for grants to support this work, consider partnering with Shaffer Evaluation Group. Contact us today for a free 30-minute consultation: seg@shafferevaluation.com.



 

Across many of the grant evaluations we’ve conducted, which span capacity-building, workforce initiatives, and K-12 education projects, we've observed some common practices associated with successful projects. How many of these practices do you use with your grant? Use this quick quiz to find out.


How to Score


For each item, give your project:

2 = Yes, consistently

1 = Sometimes / partly in place

0 = Not yet


The 10-Practice Grant Success Quiz


1) We Have a Simple Logic Model or Theory of Change That Staff Can Explain in Plain Language


This is the “map” that keeps a project from becoming a long list of disconnected tasks. The best versions are short, visual, and actively used—not just filed away with the grant application. Logic models are widely recommended as tools for planning, communicating, and evaluating how activities connect to intended outcomes.


2) We’ve Narrowed Our Performance Measures to a Small Set That We Actually Use


Many grants struggle under the weight of too many indicators. Strong projects identify a manageable handful that directly reflect the outcomes in the logic model. This approach to performance measurement also improves reporting clarity and supports real improvement during implementation.


3) Our Roles and Decision-Making Are Clear


Successful grants rarely depend on heroic effort from one person. They rest on clear lines of responsibility: who participates in decision-making, who owns each grant deliverable, and who is accountable when timelines slip.


4) We Run the Project with a Steady Cadence (Not Just Bursts Before Reports)


A predictable rhythm—monthly check-ins, short action logs, and visible next steps—keeps momentum and reduces last-minute chaos. This aligns with continuous improvement approaches that emphasize routine documentation and follow-through.


5) We Track Risks Early and Revisit Them


Staff turnover, procurement delays, partner shifts, and seasonal constraints can derail even well-designed projects. Proactive risk management is a recognized grant management practice to protect timelines, budgets, and compliance.


6) We Know Our “Core Components” and Protect Them


Strong programs are consistent where it matters most. That means identifying the few essential elements that must be delivered with fidelity, even if other parts of the program vary by site or participants.


7) We Allow Smart Local Adaptation—and Document It


The strongest projects don’t confuse flexibility with drift. They distinguish between acceptable adaptations and changes that would weaken the model. This “fidelity and fit” balance is especially emphasized in education grant contexts.


8) Our Data Process Is Realistic for Our Staffing and Context


Good data systems are simple, routine, and built to survive busy seasons, staff changes, and competing priorities. The best ones focus on a small set of measures (see #2), assign clear ownership, and follow a predictable schedule so data are ready well before reporting deadlines.


9) Partners Are Integrated into Implementation—not Just Listed in the Proposal


Effective partnerships show up in the work, not just the application narrative. Roles are clear, timelines are shared, and partners have regular touchpoints where they help solve problems and shape adjustments. When partners are truly embedded, they expand capacity and increase the odds that key activities will continue after funding ends.


10) We Began Sustainability Planning Early


This is one of the most consistent predictors of long-term impact. Sustainability should be planned from the start of a grant, not in the final year.


Your Score (0–20)


13–16: Good Foundation with a Few Stress Points. Your project is likely to deliver most core outcomes, but it may be vulnerable to turnover or scope creep.

0–8: High Risk, High Opportunity. The good news: small changes can make a big difference quickly.
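

If you want to tally scores programmatically, say across a portfolio of grants, the short Python sketch below implements the 0/1/2 scoring described above; the helper name is made up, and the comment cites only the 13–16 band published in this post.

def quiz_total(item_scores: list[int]) -> int:
    # Sum the ten item scores, each 0 (not yet), 1 (sometimes/partly),
    # or 2 (yes, consistently), for a total between 0 and 20.
    assert len(item_scores) == 10 and all(s in (0, 1, 2) for s in item_scores)
    return sum(item_scores)

# Six practices fully in place, three partial, one missing:
print(quiz_total([2, 2, 2, 2, 2, 2, 1, 1, 1, 0]))  # 15, in the 13-16 band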


If your score suggests a few of these practices need tightening, that’s good news. Most grants don’t need a redesign—they need a sharper operating system.


Partnering for Success


Shaffer Evaluation Group can help you build that operating system. We support grantees with practical, right-sized evaluation and implementation tools, such as logic models that get used, lean measurement plans, and early sustainability roadmaps. Whether you need a quick mid-course tune-up or a full external evaluation, we’ll help you turn a good idea into a project that runs smoothly, proves its value, and lasts.



Engaging Stakeholders Early


One of the most valuable steps in any evaluation is collaborating with stakeholders who are directly involved in program implementation or data collection. While project directors often oversee planning, they may not be the ones executing the work or gathering data. By involving implementing staff early, project directors and evaluators gain insight into what’s feasible and what may need adjustment.


These leaders, embedded in the daily operations of their schools or departments, can identify who has the capacity to support implementation and data efforts—insights that are often missed without their input. Early engagement fosters a sense of ownership and accountability among stakeholders, which can enhance the evaluation process.


Being Strategic with Data Collection


More data isn’t always better. One of the most important lessons in evaluation is the value of being intentional about what data to collect—and why. Project directors should focus on gathering data that directly supports understanding of implementation progress. It’s essential to be mindful of the burden placed on both data collectors and respondents, such as survey participants.


A helpful starting point is to review what data are already being collected. Leveraging existing sources can reduce duplication, streamline efforts, and ensure that new data collection is purposeful and manageable. This strategic approach not only saves time but also enhances the quality of the evaluation.


Prioritizing Ongoing Communication


Consistent communication is essential for a successful evaluation. Project directors should regularly connect with those responsible for implementation to understand how things are progressing. This helps identify emerging needs and ensures staff have the time and resources to carry out their roles effectively.


These check-ins—whether through recurring meetings or informal in-person visits—build trust and foster a collaborative environment. Equally important is maintaining open lines of communication between the evaluator and the project director. Regular meetings not only provide valuable context about implementation but also create space for reflection, adaptation, and strategic decision-making.


Building Relationships for Effective Evaluation


Evaluation is as much about relationships and strategy as it is about data. Across diverse projects and grant programs, three lessons consistently stand out: the importance of stakeholder collaboration, the need for intentional and manageable data collection, and the value of ongoing communication.


When these elements are prioritized, evaluations become more responsive, grounded, and impactful—ultimately supporting programs in achieving their goals more effectively. A strong relationship between evaluators and stakeholders can lead to richer insights and a more nuanced understanding of program dynamics.


Conclusion: The Path Forward


As we continue to navigate the complexities of evaluation, these lessons remain at the forefront of our practice. By engaging stakeholders early, being strategic with data collection, and prioritizing ongoing communication, we can enhance the effectiveness of our evaluations.


This approach not only benefits the evaluation process but also strengthens the programs we aim to support. Together, we can make a bigger difference for underserved communities both in the US and globally.


Interested in working with Shaffer Evaluation Group? Contact us today for a free 30-minute consultation: seg@shafferevaluation.com.


