Articles

Staff Perceptions of Variables Affecting Performance in Human Service Organizations

Nonprofit and Voluntary Sector Quarterly 39(6) 971-990
© The Author(s) 2010
Reprints and permission: http://www.sagepub.com/journalsPermissions.nav
DOI: 10.1177/0899764009342896
http://nvsq.sagepub.com

Thomas Packard

Abstract

This article summarizes results of a study of programs providing workforce and educational development services for high-risk youth. A model of management functioning and program performance is used as a structure for program staff to rate the relative importance of selected inputs (clients, staff, resources, etc.) and throughputs (management and service delivery processes, framed here as management and program capacity) as they affect results. Factors seen to be most important in affecting performance included adequate funding for the program, leaders having a positive attitude, staff being motivated and committed, a facilitative organizational structure, and a budgeting process which ensures effective resource allocation. Because performance measurement is seen as multidimensional and socially constructed, findings also include respondents' opinions on the most relevant measures of performance. These results can provide insights and guidance to researchers and agency managers regarding studying and improving organizational performance.

Keywords: human service, organizational performance measurement, program capacity, management capacity

San Diego State University, San Diego, CA

Corresponding Author: Thomas Packard, Associate Professor, School of Social Work, San Diego State University, 5500 Campanile Dr., San Diego, CA 92182. Email: [email protected]

Background and Purpose

Organizational performance, a compelling concern of all organizations (Baruch & Ramalho, 2006), is particularly complicated for human service organizations (HSOs). These organizations may be governmental, nonprofit, or even for-profit, but their key
commonality is that their mission is to transform people, by ameliorating, preventing, or otherwise addressing problems such as child abuse, mental illness, substance abuse, homelessness, and poverty. Because they are largely funded by taxpayers, through provision by government agencies or contracts with nonprofit and for-profit organizations, and by donors of all types, they have multiple stakeholders to whom they need to demonstrate their value (Herman & Renz, 1997). Performance measurement in HSOs, with sometimes involuntary clients and with varying expectations from multiple stakeholders, is arguably more complex than in business organizations or even in other nonprofits, such as arts and culture organizations. These unique features of HSOs, as distinct from the other sectors mentioned above, point to the need for a model which shares common features (e.g., management processes) with other organizations but addresses HSO-specific elements such as client characteristics (inputs) and the throughput processes which involve changing people with complex problems.

Determining factors affecting organizational performance in the human services has been a challenging and important field of study for decades (Patti, 1987; Schalock & Bonham, 2003; Stone & Cutcher-Gershenfeld, 2001), and the need for improved performance and accountability in the human services has been amply discussed in the literature. Forbes (1998) reviewed studies of organizational effectiveness in nonprofit organizations from 1977 to 1997. Light (2002) recently suggested that the future of nonprofits may depend heavily on their performance as perceived by key stakeholders.

To improve performance, it must first be defined and described and then measured. In addition to addressing performance outcomes, factors which may affect performance need to be considered. These include inputs such as staff and client characteristics and throughputs (processes within the organization). This creates a foundation for assessing and improving performance, adding value for clients served and society as a whole. This study addresses how performance is defined and measured and the factors which are seen as impacting it, including staff and client characteristics, management capacity, and program capacity.

The purpose of this article is to offer new knowledge regarding factors seen as impacting performance, generated from data gathered from programs providing workforce and educational development services for high-risk youth. Specific questions to be addressed are as follows:

1. What factors are seen by program staff as affecting program performance?
2. What do these respondents consider to be the most appropriate ways to measure performance in these programs?

These findings can help provide a foundation for further research and for initiatives to improve these factors and thereby improve outcomes for clients. Specifically, as discussed below, the model presented here integrates and augments existing models for HSOs, providing a more comprehensive framework for further research. As also noted below, studies to date have looked at small numbers of variables, and eventually new knowledge will be needed to show the relationships among a fuller set of factors.


Defining and Measuring Performance

Research and theory development in organizational performance in HSOs is a dynamic field of study, with many key discoveries yet to be made. Organizational performance is now seen as multidimensional and socially constructed (Herman & Renz, 1997): performance can be assessed in terms of goal accomplishment, efficiency or cost effectiveness, acquisition of key resources (e.g., funding), environmental adaptation, satisfaction of key stakeholders (e.g., board members, regulators, funding sources), and internal processes (e.g., organizational learning, staff morale, and organizational culture). Management outcomes can be reflected in financial health and employee satisfaction (Sowa, Selden, & Sandfort, 2004).

Program performance can be described in terms of outputs, outcomes, productivity, and quality. There are important differences between these four concepts. Outputs, units of services provided or completion of a standardized program, are easy to measure but say nothing about actual changes in a client. Outcomes measure change in a client's quality of life, such as obtaining a job or housing. Client satisfaction is sometimes used as an indicator, but, as noted below, it is not often related to client outcomes. Productivity is usually measured in terms of unit cost or efficiency (cost per output; Martin & Kettner, 2009). Quality can be measured with reference to a defined standard such as a living wage job (an outcome) or implementation of evidence-based practice standards (an output; Austin & Claasen, 2008). A hypothetical worked example of these cost calculations appears at the end of this section.

There are several streams of research which are relevant to defining and measuring the performance of HSOs. Regarding the final stage of the process, there is, of course, a vast literature on program evaluation. Studies range from longitudinal experimental designs with control groups to individual case studies. The extent to which such evaluations open the "black box" of internal organizational processes varies widely, usually with little attention to internal factors. If consumers of evaluation want to know why something works, it will be necessary to open the black box to derive knowledge which will be useful in other settings. Use of the variables described here can augment program evaluation studies by providing new knowledge about why a program is effective.

Other relevant areas of research and practice include the balanced scorecard methodology (Niven, 2003), evidence-based practice (McNeece & Thyer, 2004), best practices (Manela & Moxley, 2002), performance-based contracting (Martin, 2005), and outcomes measurement (Lampkin & Hatry, 2003). Ultimately, it is becoming increasingly clear that, to acquire a comprehensive picture of organizational performance, elements of all these streams of research will need to be considered.
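
To make the output, outcome, and unit cost distinctions above concrete, consider a purely hypothetical program (all figures invented for illustration): suppose it spends $500,000 per year to deliver 2,000 hours of training (an output) to 100 youth, 40 of whom obtain jobs (an outcome). Its unit cost, in the sense of cost per output, would be

\[
\text{unit cost} = \frac{\text{total program cost}}{\text{units of output}} = \frac{\$500{,}000}{2{,}000\ \text{hours}} = \$250\ \text{per training hour},
\]

while its cost per outcome would be $500,000 / 40 = $12,500 per employed client. The same program can thus look efficient on outputs while being costly on outcomes, one reason the two concepts are best reported separately.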

Factors Affecting Performance in HSOs

This section draws on some of the key research to date and relevant theories in HSOs to suggest a comprehensive model of organizational performance. This model is generic in the sense that it can be applied to a range of HSOs, from child welfare
and mental health to workforce development. It would not be fully appropriate in nonprofit organizations such as arts and culture organizations, which have transformation processes (e.g., art exhibits and concerts) and clients who are very different from those in HSOs. Testing the model as a whole would be a huge undertaking. This model is intended to serve as a conceptual starting place to provide a framework for testing the pieces of the model which show the most promise for improving performance and which have not yet been adequately studied.

Theories regarding the factors affecting performance are varied and complex, ranging from service delivery models and staff capabilities to management processes such as planning, leadership, and information systems. Yoo et al. (2007) recently provided a conceptual integration of many aspects of organizational performance in child welfare organizations, grouping factors into four "organizational constructs," which they saw as "potentially the strongest predictors of service effectiveness" (p. 67). Their factors included organizational structure, environmental factors, working conditions such as social support and leadership, worker characteristics, and workers' response to the work environment (e.g., job satisfaction). To date, most studies in this area address only parts of the system, such as individual variables (Koeske & Koeske, 2000), organizational structure (Schmid, 2002), agency boards (Herman & Renz, 1998), or service delivery methods. Sowa et al. (2004) advanced thinking in this area, making distinctions between processes/structures and outputs, and between management capacity and program capacity.

Other research has considered a similarly broad range of variables. A study by Yoo and Brooks (2005) reflected the complexities of performance, addressing the organizational context (leadership, job satisfaction, burnout, organizational commitment, locus of control, and routinization), client characteristics, and client outcomes. Rubenstein, Schwartz, and Stiefel (2003) have presented a methodology for adjusting performance measures based on variation in client characteristics. Glisson and Hemmelgarn (1998) studied relationships between community context and conditions, organizational climate, services coordination, interorganizational relationships, service quality, and service outcomes. They found that client improvements were greater when staff offices had more positive climates. Climate factors also correlated with service quality, but, highlighting the complexities of performance measurement, service quality was not significantly correlated with client outcomes. Glisson and James (2002) found that "more constructive team cultures were associated with more positive work attitudes, higher service quality, and less turnover" (p. 788). Letts, Ryan, and Grossman (1999) not only discussed factors affecting performance but also addressed the importance of building organizational capacity to improve performance. However, no studies, except perhaps an extensive study by Olmstead and Christensen (1973), have comprehensively addressed the range of variables which can be shown to affect performance.

The literature in the field studied here—youth employment and workforce development—is not as advanced as that reflected in the other studies cited here but
rather is at the stage of developing best practices for testing. There have been program evaluations (Brown & Thakur, 2006) and action research projects reported by technical assistance organizations (National Youth Employment Coalition, 2005) but no studies of organizational factors reported in the academic literature.

Much remains to be learned, in a comprehensive way, about which organizational variables (specifically, factors including management and service delivery processes and systems, administrative leadership, and staff characteristics) do, in fact, relate to organizational performance, in what contexts and combinations, and under what conditions. For example, leadership has been much studied in both the business and human services literatures, but dependent variables are often narrowly focused on factors such as job satisfaction. Organizational outcome studies have a similar limitation in that they often measure only one organizational outcome and only selected independent variables, which does not reflect the complexity of how organizations operate.

To summarize, the literature on HSOs includes a number of studies of organizational factors as they affect organizational climate, job satisfaction, and program evaluations which report on outcomes, but much less has been reported about how specific organizational factors actually affect ultimate outcomes. This issue is particularly important for HSOs, as distinct from other nonprofit organizations, because so many HSOs now receive substantial funding from government contracts, which have precise requirements for services and results expected. Further research in this area should lead to opportunities for improving performance in HSOs.

A Model of Organizational Performance Factors

This model (see Figure 1) represents a synthesis of the existing research cited above, integrating elements from existing models. Some items (e.g., leadership, organization structure, and human resources processes), which in the literature include multiple dimensions or factors, are represented here by broader indices so that the survey could be comprehensive.

The model uses a systems framework (Martin & Kettner, 2009) which begins with inputs, including client characteristics, staff characteristics (e.g., degrees, experience, motivational profiles, commitment, morale), leadership (Packard, 2009), management competencies (Menefee, 2009), resources (e.g., funding, facilities), governance boards, and community context and conditions (e.g., Glisson & Hemmelgarn, 1998). Throughputs can be grouped by management and program capacity (Sowa et al., 2004). Outputs are assessed at the program, management, and environment levels.

Because this study intends to look at staff perceptions of organizational factors at a broad level, all the possible and empirically supported causal links among specific variables are not detailed in this model. It does, however, provide a framework to focus future research. A notable addition of this model is the detailed inclusion of inputs, including client and staff characteristics, community conditions, and resource availability, which can affect organizational performance outcomes of HSOs, the specific focus of this model.


Figure 1. Organizational performance: A logic model

Inputs
  Community context & conditions (environment): poverty, available resources and support systems, funding, laws & regulations
  Clients: demographics, competencies (e.g., intellectual, emotional), resources (e.g., support systems, time, transportation)
  Staff: demographics, degrees, experience, motivational profiles, commitment, morale, willingness to adopt best practices, locus of control, beliefs regarding program and self-efficacy, etc.
  Administrators: leadership practices, management competencies
  Resources: salaries, facilities, equipment, etc.
  Board/board effectiveness

Throughputs
  Program capacity (service delivery technology & processes): clear standards & procedures; program integrity/logic models followed; extent of program implementation, including financial and personnel resources allocated; model fidelity, including service dosage/mix, use of best practices, evidence-based practices; licensing
  Management capacity:
    Management processes: mission statement, strategic plans, HRM systems, financial management, management information systems, evaluation, policies & procedures, etc.
    Structure: reporting relationships, formalization, communication, decision making, control, etc.
    Climate and culture: norms, values, outcome orientation, teamwork, support, etc.
    Quality of working life: worker autonomy, worker involvement in decision making, workload, working conditions, diversity issues, etc.

Outputs
  Program outcomes: service outputs, effectiveness/client outcomes (goal attainment), quality, stakeholder & client satisfaction, cost effectiveness, unit cost
  Environmental relations: satisfying strategic constituencies, environmental adaptation
  Management outcomes: financial health, employee satisfaction/quality of working life, commitment
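
As an illustration only, and not part of the original study, the model's input-throughput-output structure can be represented as a simple data structure; a researcher mapping survey items onto the model might sketch it as follows. The category names are taken from Figure 1, but the code itself, including all identifiers, is hypothetical:

from dataclasses import dataclass, field

# Hypothetical sketch of the logic model's three stages as containers;
# category names follow Figure 1, everything else is illustrative.
@dataclass
class PerformanceModel:
    inputs: dict = field(default_factory=lambda: {
        "community context": ["poverty", "available resources", "funding", "laws & regulations"],
        "clients": ["demographics", "competencies", "resources"],
        "staff": ["degrees", "experience", "motivation", "commitment", "morale"],
        "administrators": ["leadership practices", "management competencies"],
        "resources": ["salaries", "facilities", "equipment"],
        "board": ["board effectiveness"],
    })
    throughputs: dict = field(default_factory=lambda: {
        "program capacity": ["service delivery technology", "model fidelity", "service dosage/mix"],
        "management capacity": ["management processes", "structure", "climate and culture", "quality of working life"],
    })
    outputs: dict = field(default_factory=lambda: {
        "program outcomes": ["service outputs", "client outcomes", "quality", "cost effectiveness"],
        "environmental relations": ["strategic constituencies", "environmental adaptation"],
        "management outcomes": ["financial health", "employee satisfaction"],
    })

# Example use: list the throughput categories a survey section must cover.
model = PerformanceModel()
print(sorted(model.throughputs["management capacity"]))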

Sowa et al. (2004) present their model for use with all nonprofit organizations, a broad category that includes not only HSOs but also organizations ranging from hospitals to arts and cultural organizations. Inputs, especially client and staff characteristics and resources provided, are especially important in HSOs, which often serve particularly disadvantaged and unwilling clients. The model of Yoo et al. (2007) is designed for HSOs and focuses on "organizational constructs," including staff characteristics but not client inputs. Therefore, for the HSO sector, the inclusion of inputs in this model adds an essential factor relevant to organizational effectiveness.


Throughput elements of this model are drawn from several sources: Sowa et al. (2004), Yoo et al. (2007), the management audit format of Lewis et al. (2007), and Herman and Renz (1998). Each of these addresses many, but not all, of the variables presented here. Management capacity includes mission, strategic planning, goals and objectives, human resources systems, financial management systems, management information systems, organizational structure (e.g., centralization-decentralization, formalization, coordination, control, worker autonomy, etc.; Schmid, 2002), and organizational climate and culture (Hemmelgarn et al., 2006), the latter factor including, for example, innovation (Jaskyte & Dressler, 2005), outcome orientation, and support (including supervisory support). Program capacity includes service logic models and model fidelity (Mowbray, Holter, Teague, & Bybee, 2003), service dosage/mix, and level of financial and personnel resources (Sowa et al., 2004). At the service delivery level, the nature of the services provided and service intensity or dosage (Berry & Cash, 2002; Fein, 2002) are particularly important. Service delivery technology, as noted by Yoo et al. (2007), is complex in HSOs and is affected by the individual and management factors identified above.

Outputs, or more broadly, end results, include outputs (measured as units of service or service completions), efficiency, quality, and outcomes (Martin & Kettner, 2009). Performance is used here as the overriding concept, encompassing both throughputs, in terms of program and management operations, and results, including outputs, quality, efficiency, and effectiveness.

Performance measurement is complicated because it is multidimensional and socially constructed (Herman & Renz, 1997), meaning different things to different stakeholders. At the broadest level, the concern is for the performance of a service delivery system, reflected in overall community well-being and addressing public policy goals (e.g., family reunification in child welfare). This is typically done through public agencies which often contract with not-for-profit and for-profit agencies and is affected by larger factors such as community economic conditions. At the agency level, an organization's environmental adaptation (growth and survival) can be addressed through a strategic constituencies approach, including stakeholder relationships (Balser & McClusky, 2005) and accountability (Ospina, Diaz, & O'Sullivan, 2002). Political considerations in measurement also need to be addressed (Stone & Cutcher-Gershenfeld, 2001).

Management outcomes can be reflected in financial health and employee satisfaction (Sowa et al., 2004). Employee satisfaction is usually not seen as an end result variable but can be an intervening variable affecting service outputs or outcomes. Program outcomes include changes in client quality of life in areas such as psychological status (e.g., satisfaction, depression), knowledge, behavior, or status (e.g., employed in a living wage job; Poertner & Rapp, 2007). Client satisfaction can also be considered but is typically not connected to service effectiveness. Yoo, Brooks, and Patti (2007) assert that service effectiveness, or the achievement of client outcomes, should be the
key measure of performance in HSOs. Yoo (2002) has added that "overall, it is critical to take the leap into studying organizational characteristics in relation to client outcomes" (p. 59; italics in original). A contingency approach operates to the extent that individual organizations will identify their most important expected outcomes, but the primary criterion identified by Patti (1987)—service effectiveness—is even more prominent today as funders expect client change such as obtaining employment. In summary, the literature (including Patti, 2009, especially pp. 13-14) suggests that the most important factors are client or community change (outcomes), reflected at the program level through numeric counts of quality-of-life changes, standardized measures, or level-of-functioning scales (Martin & Kettner, 2009).

Finally, a feedback loop in the model suggests that members of an organization can examine outcomes—positive and negative—to consider changes that need to be made in inputs and throughputs to improve performance and adapt to new conditions.

This study intends to add knowledge in this area by looking at staff perceptions regarding the importance of a range of these organizational characteristics affecting performance, and also perceptions about the best measures of performance in selected HSOs, suggesting opportunities for improving performance and for further research.

Setting and Method

The San Diego Workforce Partnership in San Diego County administers Workforce Investment Act funds from the federal Department of Labor. Annually, approximately $3,600,000 (dependent on annual funding) is distributed to contract agencies, which include not-for-profit community-based organizations, the juvenile court, and community schools. These programs serve approximately 1,000 clients per year. Services provided include work readiness training, occupational skills training and preparation, subsidized employment, educational services, and leadership development services. Staff skills and qualifications range from paraprofessional to master's level. Clients range in age from 15 to 21 and have some combination of barriers including (but not limited to) poverty; mental or physical disabilities; limited English proficiency; and being court-involved/youthful offenders, gang-involved youth, current and former foster care youth, runaway/homeless youth, and/or youth of incarcerated parent(s) or incarcerated sibling(s). Budgets for individual programs range from $150,000 to $850,000. These programs are overseen by eight program specialists employed by the Workforce Partnership.

Data were gathered through questionnaires administered online to staff and managers of fourteen programs providing workforce and educational development services for high-risk youth and to the funding source's contract monitors who oversee these programs. Based on previous research and the literature on organizational effectiveness in HSOs, 31 factors were chosen as relevant to organizational performance. Each element is based on a variable in the model identified above. To keep the survey at a manageable length for respondents, factors were of necessity compressed into broad categories. For example, a commonly used instrument to measure leadership,


the Multifactor Leadership Questionnaire (Bass & Avolio, 1990), includes 45 questions; and the Organizational Culture Profile (O'Reilly, Chatman, & Caldwell, 1991) uses 54 questions to measure eight dimensions of culture, including innovation, attention to detail, outcome orientation, aggressiveness, supportiveness, emphasis on rewards, team orientation, and decisiveness. Some items here were adapted from a management audit designed for HSOs (Packard, 2000). That audit (Lewis, Packard, & Lewis, 2007) contains 87 questions; for the sake of brevity, and because the audit included areas (e.g., risk management) with no theoretical or empirical relationship to organizational performance, not all items were used.

Since this study is intended to get the views of employees from a broad overall perspective, the questionnaire addressed many factors, each represented by one item. Some of these are indices created from larger numbers of items; questions for each specific item would have made the questionnaire too long. For example, one item in this survey ("the organization's structure facilitates cross-functional communication, effective decision making, collaboration, teamwork, and support") uses one global measure of "structure" to represent a cluster of factors in the organizational structure literature, including communication, decision making, and the like. Future research which focuses specifically on a limited number of variables will use instruments with more detail.

The questionnaire had three sections. Two sections used the 31 factors discussed above: one section enabled respondents to rate these factors in terms of their importance in affecting organizational performance, and another rated respondent perceptions regarding the degree to which each of these factors is present in their organizations. In this study, these factors are separate constructs. The second set of questions, asking respondents to rate how they see these factors in their organizations, is common in organizational surveys, often framed as employee attitude surveys regarding the work environment. Several examples (Glisson & Hemmelgarn, 1998; Sowa et al., 2004; Yoo & Brooks, 2005) were mentioned in the literature review above. However, asking employees how important these factors are is a new area of study. Knowing how employees see the importance of particular factors can guide organizational change leaders who want to motivate staff to help improve aspects of organizational functioning. Only importance results will be presented here.

Regarding importance, respondents were asked to rate each item in terms of its importance in achieving desired outcomes on a 4-point Likert-type scale ranging from not at all important to essential or extremely important. Three items addressed community and client inputs: the community context, client characteristics, and laws and regulations affecting the program. The Cronbach's alpha for these was .306, suggesting that these items measure different factors, which indeed they do. The same is true for the five items rating organizational inputs such as adequate funding, staff training and motivation, facilities, and the agency board (Cronbach's α = .487) and for program capacity (two questions concerning service delivery methods and adequately trained staff, with an alpha of .435). Fifteen items addressed
management capacity, with factors listed above, including planning, human resources, financial, and information systems. The Cronbach's alpha for these items was .878, suggesting that they tend to be measuring the same property.

The third section of the questionnaire used both structured and open-ended questions which asked respondents to indicate the most appropriate ways to measure performance. Based on the literature above on measurement of organizational effectiveness, respondents were asked how good various measures were, on a scale ranging from excellent to not relevant. Each measure was rated with one question. The instrument was pilot tested by three program managers, and minor wording changes were made to clarify several questions. Frequencies, means, standard deviations, and results of correlation analyses between selected variables will be reported here. This instrument has not been assessed for validity.

Staff of the 14 programs were sent an e-mail message from their program director explaining the survey. "Staff" included employees of the programs which had contracts with the Workforce Partnership, mainly community-based, not-for-profit organizations. Also included were employees of the State Employment Development Department (EDD) office (traditionally known as the unemployment office) who are outstationed at some of the community agencies, and program specialists who serve as contract monitors. These messages were followed by an e-mail message from the researcher which further explained the research and asked staff to click on a URL to complete the online survey (Ritter & Sue, 2007). Approximately 2 weeks after the initial contacts, the researcher sent a follow-up e-mail message thanking those who had responded and asking others to fill out the survey.

Surveys were sent to 124 e-mail addresses. Fifty-two responses were received, for a 42% response rate. Many of the smaller programs had response rates of 80% to 100%. The overall rate was lower due to low responses from the larger programs (two had 24 staff each), which had high numbers of EDD staff. Two programs had no responses, one because the program manager did not forward messages in time. Several problems with e-mail, including agency blocking programs and bad e-mail addresses, also affected the response rate. Excluding the two largest programs, with 8 responses of 51 possible, and the one which had no responses would have resulted in a response rate of 79%.

Respondents indicated the following staff positions on their surveys: 18 line workers, 5 supervisors, 16 managers, 6 program specialists, and 7 other, which included administrative staff such as business services coordinators.
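
For readers less familiar with the reliability statistic reported above, Cronbach's alpha can be computed directly from the respondent-by-item response matrix. The following is a minimal sketch with made-up ratings; the function and data are hypothetical, not drawn from the study:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    # for a (respondents x items) matrix of ratings.
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)

# Made-up ratings: 5 respondents, 3 items on a 4-point scale.
ratings = np.array([
    [4, 4, 3],
    [3, 3, 3],
    [2, 3, 2],
    [4, 4, 4],
    [1, 2, 2],
])
print(round(cronbach_alpha(ratings), 3))  # prints 0.933: items move together

A low alpha, such as the .306 reported for the community and client input items, indicates that items vary largely independently, which is why they are treated here as distinct factors rather than as a single scale.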

Results

Views on Measures of Organization Effectiveness

Results of the section of the questionnaire which measured views on the usefulness of various measures of performance (means and standard deviations) are indicated in Table 1. Reliability of the scale was high, with a Cronbach's alpha of .917, suggesting that these items may comprise an overall perception of a global performance measure.


Table 1. Ratings of the Quality of Measures of Effectiveness: Overall and by Job Level

Factor                                                     Overall M (SD)  Manager M (SD)  Supervisor M (SD)  Line Worker M (SD)
Accomplishment of goals and objectives                     4.16 (1.00)     4.19 (.98)      4.20 (1.30)        4.29 (.77)
Satisfaction of outside stakeholders such as
  funding agencies (a)                                     3.58 (1.10)     4.00 (.82)      3.80 (1.30)        3.47 (1.07)
Client satisfaction                                        4.64 (.60)      4.50 (.63)      5.00 (.00)         4.59 (.71)
Cost effectiveness or unit cost                            3.64 (.92)      3.50 (.97)      4.20 (.84)         3.71 (.85)
The agency's ability to adapt to changes in the community  4.20 (.67)      4.13 (.62)      4.60 (.55)         4.18 (.73)
Financial health of the agency: having adequate funds      4.12 (.77)      4.19 (.75)      4.20 (.45)         4.12 (.86)
Employee job satisfaction                                  4.28 (.78)      4.31 (.48)      4.00 (.71)         4.12 (1.10)

Note: Mean scores: 5 = excellent, 1 = not relevant.
a. Statistically significant (
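
As a purely illustrative aside, the group means and standard deviations in Table 1 are the kind of summary a few lines of analysis code can produce from raw survey responses. The data frame and column names below are invented, not the study's data:

import pandas as pd

# Hypothetical responses: one row per respondent, a job-level column,
# and one column per effectiveness measure rated 1-5.
df = pd.DataFrame({
    "job_level": ["manager", "supervisor", "line worker", "line worker"],
    "client_satisfaction": [5, 5, 4, 5],
    "cost_effectiveness": [4, 4, 3, 4],
})

# Mean and standard deviation per measure, overall and by job level,
# mirroring the layout of Table 1.
overall = df.drop(columns="job_level").agg(["mean", "std"])
by_level = df.groupby("job_level").agg(["mean", "std"])
print(overall)
print(by_level)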