How AI-based HR Chatbots are Simplifying Pre-screening

Introduction to Pre-Screening
Once organizations have sourced prospects, the next step is pre-screening, which happens right before interviews are scheduled and candidates are evaluated further. Pre-screening entails shortlisting candidates from a pool of resumes in accordance with the role at hand. It also determines whether an applicant has the qualifications needed for the job the company is hiring for.

The result of pre-screening decides whether the candidate moves to the next round of the interview process. No matter how senior the position, a pre-screening process gives employers an advantage and prepares them for the interviews ahead. Pre-screening is an important information-gathering tool. When done well, it can help flesh out what’s not on a resume.

Pre-screening also lets job applicants understand the requirements of the position at hand. Sometimes, it’s the applicant who decides during a pre-screening interview that they’re not interested in a particular position. It also gives candidates easy access to information about the compensation package, benefits, nature of work, and the organization itself.


This step in the recruitment process is therefore crucial: it can save a lot of time, filter in the right candidates, and bring quality hires on board.

The screening interview consists of a list of repetitive questions that job applicants need to answer. It aims to determine whether candidates have the required skill set and are open to relocating, to learn their salary expectations, and to answer any questions they might have. Well-executed pre-screening interviews effectively weed out candidates who are not the right fit before you bring them in for a more rigorous in-person interview.

Manual pre-screening is gradually becoming a thing of the past as automation continues to transform the recruitment space. HR bots are simplifying one of the most time-consuming aspects of talent acquisition and increasingly incorporating analytics to make predictions about candidate fit and quality. We are entering a new evolution in this industry, driven primarily by the use of AI to take the load off individual recruiters.

Challenges Faced by HR Teams in Pre-Screening
Pre-screening has become a time-consuming procedure for recruiters. Executing the whole process manually has become outdated, given the significant rise in job seekers and the demand for high-potential talent. Such a practice not only consumes recruiters’ time with repetitive tasks that do nothing to improve the hiring process, but also distracts them from their most important responsibilities: identifying, selecting, and onboarding quality talent.

Listed below are some of the major challenges HR teams face in the pre-screening round.


Managing High Volumes
Organizations hire all year long. However, maintaining a quality talent pool, especially for bulk hiring, becomes problematic. HR teams in mid- to large-size organizations do mass hiring, which requires a lot of effort and time to manage thousands of applicants. Even before scheduling telephonic or in-person interviews, recruiters need to gather information that may be missing from resumes. The pre-screening process becomes difficult to manage because many of the candidates who apply either leave midway or back out at the last moment.

The time required for pre-screening comes on top of scheduling and in-person interviews. In addition to managing multiple candidates simultaneously, recruiters have their day-to-day KRAs and KPIs to fulfil. Yet to gather necessary, repetitive information from job applicants, they have to put in extra time to schedule and initiate conversations. This time could be spent on more important tasks.

Engaging the Right Candidates
At times, there are discrepancies in the job descriptions that companies roll out. This ends up attracting the wrong set of candidates, whose skill sets do not match the job at hand. Going through the hassle of asking the same set of questions without getting relevant responses is painful for recruiters, who only then realize that the candidates are not a fit.


Manual pre-screening is not only a hassle for HR teams; it is also challenging for candidates switching or applying for jobs. From reading a job description and applying, to getting shortlisted and waiting patiently for a pre-screening interview, candidates have a lot on their minds that goes unaddressed. Listed below are some of the challenges faced by job applicants:

Unavailability of Recruiters
Recruiters typically work 9 to 5, which is also when they reach out to applicants. Applicants, however, may not always be available to take calls during office hours; they often find time to connect with recruiters only after they have clocked out. This creates a timing mismatch and a lack of coordination between recruiters and applicants, and the unavailability of either side only lengthens the process.

Lack of Engagement
Anyone looking for a job has a number of questions about a prospective organization: its work culture, compensation structure, job responsibilities, and more. But recruiters often fail to give applicants adequate time because of their day-to-day tasks. Applicants’ queries remain unanswered, which is why they lose interest and abandon the application midway.

Poor Candidate Experience
Generally, recruiters have a candidate’s resume before making the pre-screening call. Due to lack of time, however, they are sometimes unable to go through resumes in detail and end up asking for information already mentioned there. If the company’s website lacks detail about the job role, candidates are left wanting more. And if their resume turns out not to match the role, they have wasted their time and effort, leading to a poor candidate experience.

Since both recruiters and job applicants face multiple issues before hiring, there is a dire need for a system that can carry out pre-screening effectively with little to no dependence on HR, one that substantially improves execution and candidate experience. One way to ensure continuous improvement in hiring efficiency is to seek the help of Artificial Intelligence (AI). Candidates are aware that the recruiting process might not be human-to-human at every step, but they welcome the chance to receive information from whichever source is available.

Randstad found 82% of job seekers believe the ideal recruiter interaction is a mix between innovative technology and personal, human interaction.
Seeing this rise in job-seeker interest, hiring departments are increasingly moving towards a hassle-free, time-efficient, and more engaging solution in the form of automation, which not only solves HR’s woes but also enhances the candidate experience.

Adoption of Technology
Screening resumes and shortlisting candidates to interview is estimated to take 23 hours of a recruiter’s time for a single hire.
Traditionally, pre-screening took up a lot of time as recruiters had to sift through hundreds of CVs to shortlist candidates for the following stages. Not only was this exercise extremely time-intensive, but it also didn’t guarantee a thorough and scientific search. Digital transformation, however, has brought faster means of pre-screening candidates with a few clicks. With many tools harnessing AI and ML, the time taken to gather and sort through hundreds of CVs has decreased immensely, ensuring that tech recruiters and decision makers get more time to engage with candidates.

Technology adoption is significantly higher among large companies, owing to their scale and their need to automate manual tasks. Smaller companies lag behind, a gap driven by varying budgets and resources.

Fortunately, technology is evolving at a rapid pace, and there are a number of powerful tools that recruiters can use to streamline and speed up the process. The game-changer has been the evolution of AI-based chatbots.

Emergence of Chatbots
Chatbots are essentially the evolution of automation. They not only perform repetitive manual tasks efficiently but also have the potential to transform the entire recruitment process by removing bias, thereby enhancing HR’s decision-making capabilities.

Sharing how recruitment chatbots are revolutionizing the tedious process of pre-screening, Teachie’s Recruitment Marketing Automater Adam Chambers says, “The recruitment chatbot ensures they’re qualified conversationally: that means no unqualified applicants can apply.” Using an AI-based chatbot helps HR professionals focus on higher-value tasks. Interview calls can now be devoted to learning about a prospect’s personality rather than just their college or salary expectations.

According to an upcoming HubSpot research report, of the 71% of people willing to use messaging apps to get customer assistance, many do it because they want their problem solved, fast.

Job applicants prefer self-service. No longer are they prepared to wait weeks, days, hours, or even minutes for an employer to help them. They want their questions answered as soon as possible.



How Chatbots Solve Pre-screening Challenges
Chatbots help screen candidates faster thanks to their human-like conversational experience, and they solve issues faced not only by candidates but by HR teams too. Developers have used AI to design bots that can replicate parts of a recruiter’s job, saving time that recruiters can use to complete their day-to-day work. Likewise, chatbots are a boon for candidates, who no longer have to wait for phone calls to get job-related updates. This is how chatbots have made lives easier:

1. Easy Configuration

Chatbots have an existing knowledge database of questions which get updated as per organizational requirements. At this stage, questions are easily configured to collect the required data from job applicants. HR departments can enter the information about the role and other details that job applicants may find useful.

2. Participant Invitation


AI-based chatbots let recruiters invite candidates individually and also provide the functionality to handle bulk candidate data. Additionally, bots allow you to customize e-mails for different job roles and candidates. Chatbots are evolving to handle bulk information with ease, removing a lot of manual hassle.

3. Data Collection

Chatbots offer a great candidate experience due to their ease of use. Since they are mobile-friendly, candidates can talk to the bot on the go. The two-way conversation lets candidates resolve their queries too. Responses are auto-saved, which means applicants can pause the conversation and resume from where they left off.

4. Candidate Screening

Data gets updated on the platform in real time as the conversation continues. Recruiters can access this data on a neat dashboard where the date and time of messages and the transcript of the conversation are available. They can then filter candidates based on their responses.
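To make the filtering step concrete, here is a minimal sketch of response-based screening. The response fields, requirement thresholds, and sample candidates are all illustrative assumptions, not any particular chatbot platform's schema.

```python
# Hypothetical role requirements; a real platform would take these
# from the configured pre-screening questionnaire.
REQUIREMENTS = {
    "willing_to_relocate": True,
    "min_years_experience": 3,
    "max_expected_salary": 90_000,
}

def passes_prescreen(responses: dict) -> bool:
    """Return True if a candidate's chatbot answers meet the role's bar."""
    return (
        responses.get("willing_to_relocate") is REQUIREMENTS["willing_to_relocate"]
        and responses.get("years_experience", 0) >= REQUIREMENTS["min_years_experience"]
        and responses.get("expected_salary", float("inf")) <= REQUIREMENTS["max_expected_salary"]
    )

# Made-up candidate records, as a chatbot transcript might yield them.
candidates = [
    {"name": "A. Rao", "willing_to_relocate": True, "years_experience": 5, "expected_salary": 85_000},
    {"name": "B. Lee", "willing_to_relocate": False, "years_experience": 7, "expected_salary": 80_000},
]

shortlist = [c["name"] for c in candidates if passes_prescreen(c)]
print(shortlist)  # ['A. Rao']
```

Once answers are captured as structured data rather than free-form phone notes, shortlisting becomes a mechanical filter like this rather than a manual review.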

Measuring the Effectiveness of Bots
Investing in AI and ML improves the quality of applicants as well as shortlisted candidates, optimizes the recruitment process for time and cost, and ensures a better experience for candidates. It also helps combat unconscious bias and promotes inclusivity and diversity. An intelligent chatbot is the answer to the complexities involved in the pre-screening process.

It solves the time-versus-quality dilemma by separating qualified resumes from disqualified ones quickly and efficiently, significantly reducing the time tech recruiters spend going through every resume.

Time-to-hire is a critical metric for measuring the success of your recruitment, and the use of AI in pre-screening significantly expedites shortlisting and reduces time-to-hire. AI helps recruiters analyze big data and apply logical filters to target relevant candidates within a few clicks. It gives tech recruiters the time to engage with candidates that was earlier devoted to manually sorting resumes. The use of ATS and AI tools decreases time-to-hire, resulting in a 69% success rate.


“When I was a full-time senior recruiter, searching for keywords in the resumes were vital – before even reading through the actual resumes…What I would do is create pre-qualifying questions that often allowed the AI in the system to auto-reject those who didn’t pass the questionnaire. Using this system, I would often be able to identify ‘perfect’ candidates…Too often, recruiters don’t know how to set it up correctly to be able to reduce their manual labor of reading all those resumes.”

-D Bowler Consulting

Benefits of Automation in HR Processes
“With text-based chatbots and automated scheduling, pre-screening is becoming a common solution in the recruitment automation process. If a candidate passes the pre-screening, the chatbot can automate the next steps in the candidate journey, such as automatically schedule the candidate for a recruiter follow-up or interview. This automated process helps save recruiters and hiring managers hours each week. In addition, it speeds the recruiting process, providing job seekers with great candidate experience. Job seekers who are qualified are automatically moved to the next step without human intervention.”

-Jonathan Duarte,


The Road to Digital Transformation: Expert Opinion
This cutting-edge technology is taking businesses by storm. The intuitive messaging-app interface makes it easier for applicants to communicate with the bot. The AI smartly asks questions and takes down all the details recruiters require for a telephonic or in-person interview.

AI and ML will continue to evolve over the next few years, and their impact on corporations will be enormous, considering their increasing use across industries. Experts across the world are in unison when it comes to vouching for the effectiveness and industry adoption of chatbots in pre-screening and other areas.

According to Christi Olson, Head of Evangelism for Search at Bing, chatbots of the future don’t just respond to questions, they talk and think. They draw insights from knowledge graphs and forge emotional relationships with customers.

Bryq Co-founder & CEO Markellos Diorinos feels that HR teams are unable to keep up with today’s non-linear career development, which is why pre-screening bots are becoming an indispensable tool for both HR departments and hiring managers in locating the talent they require. Diorinos believes that combining big data methods with proven scientific methods will be the next wave before AI algorithms are mature enough to offer viable solutions.

After months of learning Artificial Intelligence modeling tools and how to leverage cloud-based cognitive services, Rattlehub Digital Head of Technology Innovation and Security James Melvin is absolutely convinced that Conversation as a Service (CaaS) is the future.

Digital transformation is making HR processes more streamlined, time-efficient, engaging, and productive.


We may still be in the early stages of the bot revolution. Sharing his views on what the future holds for pre-screening, TSC General Manager Americas Gordon White says, “We are still in the early days of chatbots but there are some interesting things to watch. Voice, as a share of user input, will proliferate. Data security will continue to grow in importance. Eventually, we’ll see consolidation of platforms and mainlining of robots into even larger platforms to become core functionality.”

Emphasizing on the importance of updating resumes, Graffersid Founder & CEO Sidharth Jain says that it is not possible to put all projects on CVs. If someone does not possess good writing skills, it does not imply that they lack domain knowledge. This is where AI will play an important role.

“The world is moving towards a more automated approach where bots are able to analyze video resumes, gather the missing information and filter resumes to come up with the most suitable candidate,” he adds.

AI innovation is transforming how HR managers view, select, and operate candidate screening software. The benefits of this are manifold; recruiters don’t have to sift through crowded job markets or endless candidate lists. Applicants are automatically shortlisted and hiring teams need only allocate strategic efforts to aid Level 2 screening (with up to 83% more accuracy, reports Vervoe). This helps create a more equitable hiring process while still determining which candidates are the best fit.



What Is Augmented Analytics in HR? Definition, Use Cases, and Key Benefits

With global data volumes rising rapidly, organizations are now poised to extract hidden insights about their workforce and generate value. However, the continued reliance on data scientists is a big challenge for most companies. Not all companies (especially small and medium-sized businesses) have the resources and skill set to realize the full potential of data.


That’s where augmented analytics comes in. It democratizes data-to-insight conversion, allowing virtually any stakeholder to access insights in a comprehensible format. This has significant implications for HR – let’s look at them in greater detail.


What Is Augmented Analytics in HR?
Augmented analytics can be defined as a branch of data science that aims to automate the insight generation process. It achieves this by using cognitive technologies, enabling machines to view and represent data from the same perspective as humans. The following components power augmented analytics:

Machine learning (ML)

ML lets technology systems intuitively learn, without the intervention of human coders. They can automatically adapt to different scenarios independent of rule-based programming. Powered by ML, augmented HR analytics can get incrementally better at providing the right insights with every data processing cycle.

Natural language processing (NLP)

NLP is particularly relevant to HR. Thanks to NLP, HR professionals do not need years of data science experience. Instead, the augmented HR analytics interface can deliver insights in a human-readable format. This is a four-step process:

Data in natural language such as English is processed via NLP and converted into a machine-readable format.
The machine data is then fed into analytics models to detect patterns, trends, and anomalies.
A predictive engine performs a root cause analysis to identify the most probable factors causing the trend.
The insight is finally converted back into a natural language, to be actioned by any lay user.
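The four steps above can be sketched end to end. Everything here is a deliberately toy stand-in: keyword matching in place of real NLP, a simple delta in place of an analytics model, and a hard-coded driver in place of a predictive engine. The point is only to show how the stages hand data to one another.

```python
def parse_question(text: str) -> dict:
    """Step 1: convert a natural-language question into machine-readable form."""
    # A real NLP layer would extract the metric and period; we hard-code them.
    return {"metric": "attrition", "period": "Q3"} if "attrition" in text.lower() else {}

def detect_trend(query: dict, data: list) -> float:
    """Step 2: feed the machine data into an analytics model (here, first-to-last delta)."""
    return data[-1] - data[0]

def root_cause(delta: float) -> str:
    """Step 3: a predictive engine would rank probable drivers; we return a fixed one."""
    return "compensation gap" if delta > 0 else "no significant driver"

def to_natural_language(query: dict, delta: float, cause: str) -> str:
    """Step 4: render the insight back into plain English for a lay user."""
    return f"{query['metric'].title()} changed by {delta:+.1f} pts in {query['period']}; most probable driver: {cause}."

q = parse_question("Why is attrition rising?")
delta = detect_trend(q, [8.0, 9.5, 11.0])  # made-up quarterly attrition rates (%)
print(to_natural_language(q, delta, root_cause(delta)))
```

Running this prints a plain-English insight rather than a raw number, which is precisely what frees HR users from needing a data scientist to interpret each result.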

Insight Automation

Under traditional analytics models, data scientists spend 19% of their time collecting data and another 60% cleaning and organizing it. These processes can be taken over completely by the insight automation capability of augmented analytics. Data scientists (or augmented analytics solution providers) can instead focus on building more effective training sets and refining analytics algorithms, while HR only needs to feed the correct data into the interface and access the most relevant insights.

Together, these elements make HR analytics much easier to use. Let us now look at the different applications of augmented analytics in HR and its key benefits.

HR Augmented Analytics Use Cases and Three Benefits You Can Expect
Interestingly, augmented analytics isn’t limited to a single HR function or area. Much like the internet, data and analytics have the potential to completely transform processes across all facets of the enterprise.

A promising use case to consider is aligning hiring efficiency to employee quality. In a competitive labor market, HR risks compromising quality due to a disproportionate focus on quantity. There are deadlines to meet, time-to-hire goals to achieve, and recruitment campaigns to be kept under budget. In the middle of these KRAs, quality of hire can often be undermined. HR augmented analytics lets you feed recruitment data into the software and assess where you stand on the quality spectrum.

Another use case for HR augmented analytics is controlling voluntary attrition. Attrition in any enterprise is a complex issue. Not all of it is voluntary, and not every case of voluntary attrition is regrettable. Augmented analytics lets you deep dive into all of these characteristics, sifting through employee lifecycle information and pinpointing the cause and nature of attrition. The resulting insights can help you refine the employee management mechanism for optimal attrition rates, targeted towards your most high-performing and ROI-friendly employees.
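As a sketch of what such an attrition deep dive produces, the snippet below separates voluntary from involuntary exits and flags the regrettable ones. The record fields, ratings, and the "regrettable" rule (a high performer leaving voluntarily) are assumptions for illustration; a real system would mine full employee-lifecycle data.

```python
# Made-up exit records with illustrative fields.
exits = [
    {"id": 1, "voluntary": True,  "performance": 4.6, "reason": "compensation"},
    {"id": 2, "voluntary": True,  "performance": 2.1, "reason": "relocation"},
    {"id": 3, "voluntary": False, "performance": 3.8, "reason": "restructuring"},
]

# Not all attrition is voluntary, and not all voluntary attrition is regrettable.
voluntary = [e for e in exits if e["voluntary"]]
regrettable = [e for e in voluntary if e["performance"] >= 4.0]  # assumed threshold

# Count probable causes among regrettable exits to target interventions.
causes = {}
for e in regrettable:
    causes[e["reason"]] = causes.get(e["reason"], 0) + 1

print(len(voluntary), len(regrettable), causes)  # 2 1 {'compensation': 1}
```

The output distinguishes the attrition you can safely ignore from the attrition worth acting on, which is exactly the pinpointing the paragraph above describes.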

These use cases only scratch the surface. You can apply augmented analytics to virtually any HR problem, such as employee engagement, onboarding pain points, or benefits administration. And this will help you achieve several advantages over traditional analytics:

1. Incorporating the power of AI into analytics systems

Enterprises are now eager to adopt AI technology but aren’t always sure of the right utilization. Augmented analytics takes a solution-oriented approach to AI adoption, identifying a measurable HR challenge and using data to solve it. As a result, you can obtain tangible returns from your investment in AI.

2. Dramatically reducing your time to value

Augmented analytics has a clear leg up on manually driven analytics systems. Data scientists can take weeks or even months to collect and cleanse the data, not to mention the time spent on building analytics models. Augmented HR analytics automates the first half of this activity, significantly accelerating insight generation. It is even faster than regular analytics because you don’t have to spend time converting these insights into action points. The predictive capabilities of augmented analytics will indicate a clear course of action.

3. Democratizing business intelligence and reducing cost

Augmented analytics opens up advanced business intelligence to the broadest group of stakeholders. From your IT team to the payroll division, from C-level leaders to third-party employee engagement consultants, virtually anyone can leverage augmented HR analytics to improve processes. This, in turn, eliminates the need to hire experienced data scientists, dramatically reducing the cost component of your analytics function.

The Future is Now: Strides in HR Augmented Analytics
Augmented analytics is an exciting area of innovation and research. Its global market is expected to cross $22 billion in the next six years, growing at a staggering pace of 25.2% every year. In fact, augmented analytics was named among our top five disruptive HR technology trends to track in 2019.

So, is this only a futuristic technology currently being hyped? The answer is a resounding no.

Technology giants across the world are already looking at implementing augmented HR analytics as part of their larger business intelligence services. One year ago, HR tech giant Workday announced a new version of people analytics that would leverage AI and ML for automated insight generation. Solutions like this help to reduce information load by a factor of 1000 or more, significantly reducing data clutter so that you can view only the most critical insights.

As HR looks at propelling a brand new growth curve for companies, fueled by fast and effective decision-making, augmented analytics will be indispensable to the HR toolkit of the future.


What do people mean when they call HR soft and fluffy?

We already have our seat at the table. We’ve forged an unbreakable chain between employee, service, and profit. We ditched analysis for analytics and correlational for predictive. Our soft got harder and our fluffy more edgy. Less pie in the sky, more ROI.

So why hasn’t all this been enough?

A good starting point is to work out what people mean when they call us soft and fluffy. What do you think it actually means? Ever given it much thought? Me neither, before now. It seems that calling HR soft and fluffy is, paradoxically, a pretty soft and fluffy thing to do. It’s a vague accusation that has multiple and distinct meanings.

Soft and fluffy is deployed to suggest that HR is not sufficiently business-focused – yet I’ve heard exactly the same criticism levelled at almost every function and at every level. So it can’t mostly still be about that. An obvious way of dealing with this is always to start with specific and real business problems or opportunities rather than with practices or techniques. By focusing too much on our practices and techniques all people can see of HR is what it does rather than why it does it.

Soft and fluffy is also used to describe HR people rather than the function as a whole. You know the stereotype: we’re more concerned with being liked than the success of the business. If anyone in any role in any function is focused more on being liked than getting stuff done then this will be counter-productive. At the same time being nasty doesn’t mean you’re helping the business. This particular image problem for HR, if that’s what it is, is part of a broader view of business success as necessarily requiring macho, ass-kicking, take-no-prisoners ruthlessness.

A third meaning of soft and fluffy is to refer to the human stuff we deal with in HR. This is a challenge I’m not sure historically we’ve dealt with too well. I’ve never understood why people think that human thoughts, feelings and behaviours are so intangible and mysterious they cannot be understood, so there’s almost no point in even trying. This leads to the even stranger idea that data comes in two varieties: hard and soft. Anything to do with people is soft even if measured in a reliable and valid way. Anything to do with numbers is hard even if measured in the dodgiest way imaginable.

HR data (with the possible exception of things like absence) is typically regarded as intrinsically soft and hence not to be taken seriously. Whereas other data, such as that used in finance, is regarded as hard and hence meaningful. I think we are partly to blame for perpetuating the idea that what we deal with is soft, because we are nowhere close to being sufficiently concerned with the reliability of the data we collect and the measures we use. This is something we could do much better.

The last meaning is somewhat related to the third. We hit candy-floss levels of soft and fluffy whenever it appears that we don’t know what we’re talking about. I don’t think other organisational functions are as relaxed as HR about what things are called and what they mean. When push comes to shove, and it often does, we get frustrated if people try to pin us down and ask what the terms we throw about actually mean. And because we don’t really know we say it doesn’t matter, and anyway we all know what it means. Right?

The single most important move we can make to rebuff the fluff once and for all is to take seriously the meanings and definitions of the terms we use as a profession. Vagueness may get you out of a hole right now, but it’s likely harming the longer-term reputation of HR. If we go around telling people that it really doesn’t matter what we call the things we do, that sounds awfully close to saying the things we actually do don’t matter either.


How can we create an economics of hope?

While the economy has recovered from the Great Recession by some measures, many households are falling farther behind. A sense of despair for many Americans has cleaved the country and shaped elections and public health. Andrea Levere ’83, president of the nonprofit Prosperity Now (formerly CFED), discusses the policies and programs that can help more people find opportunities for hope.

There are a discouragingly large number of reasons to see a bleak economic future for many Americans, including rising inequality, low social mobility, and shockingly pervasive financial insecurity. Princeton economists Anne Case and Angus Deaton have documented “deaths of despair” tied to deterioration in economic and social wellbeing that creates a “cumulative disadvantage” that is all but insurmountable.

A recent study found that men’s lifetime earnings have been falling since 1942. Women entering the workforce meant that overall household incomes increased until 1999, but they have been falling since. Another paper found that American children’s chances of earning more than their parents have been shrinking for 50 years. Where 90% of children born in 1940 earned more than their parents, only 50% of the generation entering the labor market today will earn more. The middle class has been particularly hard hit by that trend.

Carol Graham, a Brookings Institution researcher, told the Washington Post that hopelessness can be passed down from generation to generation. She is looking for lessons in the lives of those who demonstrate resilience. She asked, “Why do some people maintain hope, when they’re not advantaged in terms of education, skills, or jobs?”

According to Andrea Levere ’83, president of the nonprofit Prosperity Now (formerly CFED), one important factor in maintaining hope is the ability to build assets. She talked to Yale Insights about the daunting scope of the problem and the policies and programs that can make a difference.

Q. By many measures we’ve seen a significant recovery since the Great Recession. What does the economy look like to low-income families?
Fourteen percent of American households meet the formal definition of poverty, but many more households are really struggling to make ends meet. When Prosperity Now looked at financial insecurity more broadly, we found 44% of households are in liquid asset poverty. That is, they don’t have the savings to cover basic expenses for three months if their main source of income is disrupted—they lose a job, get sick, or face some other financial shock. Similarly, when the Federal Reserve asked households if they could cover an unexpected $400 expense without selling something or borrowing, 46% of Americans said they could not. Focusing on assets changes the entire conversation. Instead of “those poor people,” it’s nearly half of us.

Q. What led us to this?
There are a range of reasons, but Jacob Hacker, a Yale professor, has written about how we have shifted more and more risk to individuals and households. We have to manage all sorts of risks that used to have institutional supports. Pensions have all but disappeared, leaving many with no retirement fund automatically set aside other than Social Security, even as steady income is turning into volatile income as long-term employment gives way to the gig economy.

For so many people, income is a safety net. We’d like it to be a ladder to economic opportunity. We need to figure out how to help each person, wherever they are starting on the income continuum, build the assets that it takes to attain financial strength, stability, and opportunity.

A decade ago, we might have thought that a class in financial education would solve all our problems, and while education is key, we now know that it’s insufficient. People need to combine knowledge and actual experience in opening bank accounts, in saving over time. And we need to ensure that the financial structures that families and individuals rely on are safe and affordable. We know these core elements can be put together successfully, but not nearly enough people have access to the knowledge and the structures. Changing that is a huge focus of Prosperity Now’s work.

Q. Which structures are helping low-income households and which are hurting?
If there’s anything we’ve learned from the explosion of predatory lending, it’s that there’s lots of money to be made in low-income communities. Unfortunately, too much of it is undermining the long-term financial security and stability of those communities. About one quarter of American consumers are either completely unbanked or underbanked.

Underbanked means they have some kind of mainstream account, but they still use alternative financial services such as a check casher, payday lender, or rent-to-own store. On average, a financially underserved person spends $2,400 a year on fees and interest.

I spend nothing on fees or interest, yet those with far fewer resources too often pay far too much. If we simply solved the problem of getting people access to fair and affordable banking, and they saved just half of what currently goes to fees and interest, they’d have over $1,000 in the bank each year.
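
The arithmetic behind that figure, using the $2,400 average cited above, can be sketched quickly:

```python
# Average yearly fees and interest paid by a financially underserved person,
# per the figure cited earlier in the interview.
annual_fees = 2400

# If fair, affordable banking let someone keep half of that as savings:
annual_savings = annual_fees / 2

print(annual_savings)  # 1200.0 -> "over $1,000 in the bank each year"
```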

One of the single most important institutions that has been created to address this is the Consumer Financial Protection Bureau. In just five years, it has returned $12 billion to consumers who were unfairly treated by a whole set of financial institutions. But today that bureau is under serious threat.

Q. How does the tax code shape opportunity and asset building?
In 2015, the tax code provided $660 billion to build the wealth of Americans, and the overwhelming majority didn’t just go to the top 1%, but to the top 0.1%. Virtually nothing went to the bottom 60% of the income scale.

The economic productivity and growth of the nation would benefit enormously by providing more incentives to low-income people to help them save and create a more stable future. Along those lines, Prosperity Now is co-leading with PolicyLink a national coalition called the Tax Alliance for Economic Mobility, which is a group of over 30 organizations, all focused on how we turn the tax code into a tool to address both wealth inequality overall as well as the racial wealth gap.

The earned-income tax credit and child tax credit are the most effective anti-poverty programs in America for working adults. And for many low-income families, tax time is when they see the largest amount of cash all year. From behavioral economics, we know that many tax filers already have spent their refunds “in their heads” before they file their taxes. If we want to encourage low-income taxpayers to consider the opportunity to split the refund so some goes to savings, we need to talk to them earlier and we need to make things as automatic as possible, perhaps even creating an account where the refund will be directly deposited. One basic idea would be to create a tax credit that matches what low-income taxpayers set aside, as we have done in recently introduced legislation called the Rainy Day Savings Act.

We’ve learned through years of research, demonstrations, and experimentation that assets are hope in concrete form. We did a national demonstration almost two decades ago with individual development accounts, which are matched savings accounts for building long-term assets such as buying a home, investing in education, or starting or expanding a business. One of the most unexpected outcomes from that research was that the lowest-income people saved the highest percentage of their resources. When asked why they did that, they said that was the price of hope.

We have a guiding assumption at Prosperity Now that low-income people have more capacity than opportunity. Our task is to create the on-ramps into the economy to allow them to be productive. In the context of profound changes in technology and an uncertain economy, we need more and better on-ramps.

Q. How does race play into opportunity, financial well-being, and asset building?
We published a report with the Institute on Policy Studies which showed that if black wealth continues to grow at the same rate it is growing today, it would take a black household 228 years to match the average wealth of a white household today. It would take a Latino household 84 years. This level of disparity is unconscionable. It is the result of centuries of racism and discrimination which, in many ways, continue today.

When you look at the enormity of this gap, it’s clearly going to require multiple strategies to address. One of the areas we focus on is housing because it continues to be the largest source of wealth for Americans, and we provide it with some of our deepest subsidies. Black and Latino households lag white households by 30% in the rate of home ownership. Are we using taxes and other incentives to support asset building effectively?

The Institute on Assets and Social Policy at Brandeis University has developed a tool that helps assess whether policies reduce or increase the racial wealth gap. On the face of it, the mortgage-interest deduction should benefit everybody, but people of color often don’t have the savings to put a down payment on a house. White people get access to down-payment savings from their families at a dramatically different rate than people of color because of the wealth gap.

Even if we think we’re doing the right thing, we may be exacerbating the wealth gap. We’re not all starting at the same place, and only by bringing this very specific lens do we understand we may need to very explicitly look at how to help these families build a down payment so they can benefit from the policies that are already helping others. We could support home ownership and help with down payments by capping the mortgage-interest deduction, which is one of our largest tax expenditures.

Q. Does education play a role in providing more opportunity?
In a demonstration on children’s savings accounts, we found in our largest experimental site, five Head Start centers outside Detroit, that the majority of the parents had given up on their children going to college by age three. It’s not because they love their children less; they thought they could never afford it.

One of the heartbreaking parts of that is that their understanding of the net cost of college for someone at their income level was wrong. They hear what it costs an upper-middle-class person to go to a school like Yale and think, “I could never afford that. I don’t want to create those expectations in my child.” Part of our work with building financial capability and savings is to help people understand what different educational paths cost for someone in their income bracket. How does a Pell grant change the math? What would it look like to go to a community college or a public university? With better information, there is an option that is a realistic aspiration for their child.

Research has shown a child with a savings account in his or her own name with as little as $500 in it is three times as likely to go to college and four times as likely to graduate as a child without. We also know that currently only one in ten children from a low-income family graduates with a four-year college degree by their mid-20s.

We know this is a situation that must be addressed. In 2015, we launched the Campaign for Every Kid’s Future to make sure that 1.4 million children have a child savings account by 2020, and every child in America by 2025.

We’d like to see national legislation to support making savings accounts available to every child in the United States. Over the last several years, the ability to do major legislation has been at the state and local level. We’re using that as an opportunity to incubate innovations and create lessons for when we can move forward at the national level.

It would have a profound impact on the next generation, but also we’ve learned that parents will do for their children what they won’t do for themselves. We’ve seen in Mississippi, and many other communities, that parents get banked when their children are banked, and, in that way, begin to build their own financial capability.


Why Asking for Advice Is More Effective Than Asking for Feedback

You just gave a great first pitch to a major client and landed an invitation to pitch to their senior leaders. Now you want a second opinion on your presentation to see if there’s anything you can improve. What do you do?

Conventional wisdom says you should ask your colleagues for feedback. However, research suggests that feedback often has no (or even a negative) impact on our performance. This is because the feedback we receive is often too vague — it fails to highlight what we can improve on or how to improve.

Our latest research suggests a better approach. Across four experiments — including a field experiment conducted in an executive education classroom — we found that people received more effective input when they asked for advice rather than feedback.

In one study, we asked 200 people to offer input on a job application letter for a tutoring position, written by one of their peers. Some people were asked to provide this input in the form of “feedback,” while others were asked to provide “advice.” Those who provided feedback tended to give vague, generally praising comments. For example, one reviewer who was asked to give feedback made the following comment: “This person seems to meet quite a few of the requirements. They have experience with kids, and the proper skills to teach someone else. Overall, they seem like a reasonable applicant.”

However, when asked to give advice on the same application letter, people offered more critical and actionable input. One reviewer noted more specific action items: “I would add in your previous experience tutoring or similar interactions with children. Describe your tutoring style and why you chose it. Add what your ultimate end goal would be for an average 7 year old.”

In fact, compared to those asked to give feedback, those asked to provide “advice” suggested 34% more areas of improvement and 56% more ways to improve.

In another study, we asked 194 full-time employees in the U.S. to describe a colleague’s performance on a recent work task. These tasks ranged from “putting labels on items” to “creating new marketing strategies.” Then, we asked employees to give feedback or advice on the work performance they just described. Once again, those who were asked to provide feedback gave less critical and actionable input (e.g. one wrote, “They gave a very good performance without any complaints related to his work”) than those asked to provide advice (e.g. one wrote, “In the future, I suggest checking in with our executive officers more frequently. During the event, please walk around, and be present to make sure people see you”).

We further replicated these findings in a field experiment using instructor evaluations. In an end-of-course evaluation, we asked 70+ executive education students from around the world to provide either feedback or advice to their instructors. Again, advice more frequently contained detailed explanations of what worked and what didn’t, such as: “I loved the cases. But I would have preferred concentrating more time on learning specific tools that would help improve the negotiation skills of the participants.” Feedback, in contrast, often included generalities, such as “This faculty’s content and style of teaching was very good.”

Why is asking for advice more effective than asking for feedback? As it turns out, feedback is often associated with evaluation. At school, we receive feedback with letter grades. When we enter the workforce, we receive feedback with our performance evaluations. Because of this link between feedback and evaluation, when people are asked to provide feedback, they often focus on judging others’ performance; they think more about how others performed in the past. This makes it harder to imagine someone’s future and possibly better performance. As a result, feedback givers end up providing less critical and actionable input.

In contrast, when asked to provide advice, people focus less on evaluation and more on possible future actions. Whereas the past is unchangeable, the future is full of possibilities. So, if you ask someone for advice, they will be more likely to think forward to future opportunities to improve rather than backwards to the things you have done, which you can no longer change.

To document this effect, we ran another study that was very similar to our first. In this experiment, we again asked hundreds of people to provide feedback or advice on a peer’s job application. But this time, we also asked feedback providers to shift their focus toward “developing the writer.” When removed from an evaluation mindset, by focusing more on developing the recipient, feedback providers were just as critical and actionable in their input as advice providers.

Is asking for feedback always a worse strategy than asking for advice? Not necessarily.

Sometimes soliciting feedback may be more beneficial. People who are novices in their field typically find critical and specific input less motivating — in part because they don’t feel like they have the basic skills necessary to improve. So for novices, it might be better to ask for feedback, rather than advice, to receive less demotivating criticism and more high-level encouragement.


HR News: Machine Learning Can Reduce Turnover

Every HR professional is trying to figure out how to stop turnover. At the same time, they are working on ways to predict which employees are preparing to leave the company and to intervene before it actually happens. In other HR news, Qdoba is facing a $400,000 fine over child labor law allegations; August 23 is Black Women’s Equal Pay Day, and new figures show how large the gap is; and one author believes social learning is the next big corporate learning strategy.

HR News
Machine Learning Can Reduce Turnover
A new study from Harvard Business Review says using machine learning on certain types of data can predict who will leave a company before the exit occurs. The article’s authors, Brooks Holtom and David Allen, say that data including an employee’s past job information and skills-related data was used to make the prediction. This is further evidence that data-driven decision making can help employers retain the workers who best support the company and reduce the costs associated with turnover. Read more here.
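
For readers curious what such a prediction looks like in practice, here is a deliberately simplified, hypothetical sketch. The study’s authors trained real statistical models on job-history and skills data; the features, thresholds, and point weights below are invented purely for illustration.

```python
# Hypothetical attrition scoring sketch. A production system would fit a model
# (e.g. logistic regression) to labeled turnover data; these rules and weights
# are invented for illustration only.

def turnover_risk_points(years_in_role, prior_jobs, updated_skills_profile):
    """Score attrition risk on a 0-10 point scale from a few invented signals."""
    points = 0
    if years_in_role > 3:           # a long stretch without a role change
        points += 4
    if prior_jobs >= 4:             # a history of frequent job moves
        points += 3
    if updated_skills_profile:      # e.g. a freshly refreshed skills listing
        points += 3
    return points

# A long-tenured frequent mover who just refreshed their skills profile:
print(turnover_risk_points(5, 4, True))  # 10 -> flag for a retention conversation
```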

Qdoba fined over $400,000 for child labor law violations
Qdoba restaurants in Massachusetts have been fined more than $400,000 after allegedly violating child labor laws more than 1,000 times. That’s according to Boston News. The state’s attorney general, Maura Healey, said minors employed at several Qdoba locations worked beyond 10:30 p.m. on school nights. Additionally, the attorney general said her office found “18 instances of a minor working over 48 hours in a week, and 25 times that Qdoba didn’t have the work permit required when hiring a minor. Each violation carried a $250 penalty.”

Black Women’s Pay Gap Continues to Exist
62% of people “acknowledge that white men make more money than Black women on average,” according to a new study from SurveyMonkey. That same survey says 44% of people are aware a pay gap exists between Black women and white women. These are just some of the newest figures released on August 23, 2019, also known as Black Women’s Equal Pay Day. According to Caroline Fairchild, managing news editor for LinkedIn, with “black women making roughly 61 cents for every dollar white, non-Hispanic men make, it would take about 19 months of work — January of one year until August of the next — for Black women to make up for the gap.” Read more here.
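
Fairchild’s “about 19 months” follows directly from the 61-cent ratio; as a quick check of the arithmetic:

```python
# At 61 cents earned per dollar, how many months of work does it take to match
# 12 months of the higher earnings?
earnings_ratio = 0.61
months_needed = 12 / earnings_ratio

print(round(months_needed, 1))  # 19.7 -> roughly January through the following August
```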

Why Social Learning is the Need of the Hour
Social learning is the “need of the hour.” That’s according to Debadrita Sengupta. If you’re unfamiliar with social learning, it’s all about people learning by observing and imitating others. Why? Because it boosts learning interaction and expression and fosters healthy competition, among other things. Details here.

How to Use OKR Reviews to Determine Compensation

One of the most common questions we get asked as leadership team coaches is how Objectives and Key Results (OKRs) should be used to determine salary, compensation, or bonuses. A growing number of organizations are eliminating the annual performance review altogether, yet many still assume they need an OKR process tied to metrics and KPIs in order to determine compensation and promotions. Or do they?

First, let’s distinguish the two kinds of OKRs we are speaking about here. The first, popularized by Google, treats OKRs as stretch goals: an organization, team, or individual should hit 60-70% of the target, with the intention of rewarding courage, innovation, and ambition (versus mere execution). The other is configured for performance milestones, with a completion target of 100%.
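
For concreteness, the stretch-goal variant is commonly graded by averaging per-key-result completion ratios. The article itself doesn’t specify a scoring formula, so the following is a minimal sketch assuming that common convention:

```python
# Grade an objective by averaging each key result's completion ratio, capped
# at 1.0. Landing around 0.6-0.7 signals an appropriately ambitious target.

def grade_objective(key_results):
    """key_results: list of (actual, target) pairs; returns a 0-1 grade."""
    scores = [min(actual / target, 1.0) for actual, target in key_results]
    return sum(scores) / len(scores)

# e.g. signed 30 customers against a stretch target of 50, shipped 2 of 3 features:
print(round(grade_objective([(30, 50), (2, 3)]), 2))  # 0.63 -> in the 60-70% band
```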

We will be discussing the first option here as it is the OKR process we normally encounter and recommend for clients. It is also the methodology where a coupling of OKRs to compensation can quickly become counter-productive, if not done right. However, much of what we share will also apply as solid practices for any goal setting or OKR process.

It’s not just about completing the OKR for the sake of the review
You’ve likely already heard that OKRs should not be tightly coupled with employee evaluation and compensation. The biggest reason for this is that your team will stop “shooting for the moon.” More precisely, tightly coupling OKRs and reviews will lead to overstated accomplishments, stunted innovation, and sandbagging of goals.

Coupling OKRs with pay leads to overstated accomplishments and stunted innovation.

How come? Because OKR goals in this context work best when they stretch and push your people to become ambitious and self-driven. In other words, the OKR process works best when individuals and teams are driven by strong intrinsic motivators like autonomy, belonging, mastery, and taking on new challenges that inspire them.

Once compensation or promotion enters the picture, motivations shift towards far less effective extrinsic goals, which means the behavior can skew towards looking good or laying the fault on others for any failures. So status, money, or self-preservation are now running the show.

We’ve seen this again and again, going back to our days working in HR departments at large corporations: when people play politics or try to game the system to meet targets, overall employee performance can actually drop.

Instead, we suggest the following practices for using OKRs in a review:

Make OKRs just one of the factors influencing compensation. When determining employee compensation, it is absolutely fine, and perhaps even valuable, to take into account the ambition of OKR goals and the success of results. However, it is also important to treat OKRs as just one of the many data points that influence these performance evaluations.

Look at other work operations and overall behaviors. If you do decide to link OKRs with reviews – as many sales teams find useful, for example – make sure other operational goals and behaviors are a large part of the picture.

In instances where OKRs represent stretch goals, any individual or team is also going to have a lot of other important daily tasks and conversations that may or may not be directly reflected within the OKR process. Examples of these are progress in career growth and skills, contributions above and beyond job expectations, working collaboratively with others to drive total business revenue, and embracing the organizational culture and values.

For example, a manager may need to have 20 client interviews per quarter as part of their expected operational tasks, which is not covered by their OKR. But even if their supervisor measured this, an OKR review still wouldn’t reveal everything meaningful about the employee’s performance and growth. All these factors should matter a great deal in choosing how to reward an employee beyond their base income:

How are they doing these interviews? Do they go above and beyond in the expected quality of operations in them? Did they take on additional work? What is their attitude and level of professionalism? Do they consistently practice and model the company’s values?

Embrace collaboration over competition in OKR-related compensation. Consider offering the same bonus to a whole organization or team as a reward for meeting ambitious OKR goals. While individual rankings might have value in determining who to promote, giving bonuses to a whole team encourages greater collaboration, cooperation, and alignment across the team to drive results within your organization.

Compensation and the OKR Process
On the other hand, individualized compensation within a team for OKR goals can lead to internal competition, with the ensuing politics, more rigid hierarchies, and unhealthy one-upmanship. Some organizations worry about social loafing, where one or more individuals don’t carry their weight yet are rewarded unfairly when the whole group receives the same compensation for OKR results.

The roots of this potential issue are much better addressed through:

• Clearer agreements around work expectations for everyone on the team

• Mutual accountability processes where group members call out one another for not meeting their agreements or pulling their own weight in meeting a goal

• Peer-level feedback review sessions that can factor into determining promotions or salary raises for individuals

Here are some additional practices for streamlining your OKR process:

Have a gap between the two conversations. Development and financial reward should never be conflated, so be sure to separate your OKR review conversations from all employee evaluation and compensation conversations.

In Work Rules!, Laszlo Bock suggests making this separation at least one month apart to decouple the two processes in the minds of your employees. By taking away concerns around salary, ranking, or status in the OKR review process, you will free up your employees to become creative and step into a learner’s mindset. Contrast this with reactivity, manipulation, or defensiveness that may otherwise arise around meeting their goals.

Stay away from formulas. Fortunately, formulas for calculating compensation are not common these days, but we still hear about organizations attempting them. The belief that they can be effective goes back to the earliest days of the industrial era, when productivity formulas came into vogue, and such formulas have only grown more complex with time. The logic goes that if one can accurately measure employee productivity, then compensation should follow accordingly.

However, that’s not how it works. Formulas often fail to reward the highest performers, even if they include productivity metrics or OKRs. How come? Because attitude, professionalism, and the little daily contributions an employee makes have a large snowball effect that goes way beyond what any formula can capture or account for. Only a managerial and/or peer review process will reveal these important subjective factors.

Time intensive as they can be, there is no substitute for these meaningful conversations to track and rate these crucial qualities. A common and successful approach that we have used, supported, and seen work many times is manager-level calibration conversations about employees, with the same standards applied transparently across the board. It is a far fairer and more accurate process for evaluating the value of employee contributions.

Never use OKRs to determine base salary. This may sound obvious, but we continue to be surprised to hear people attempting this. Start with local market standards to actually determine base salary.

Accept that successful compensation processes are always subjective. This will no doubt irk some people, but it is perhaps the most important consideration. Even the most “objective” OKR metrics review process will not eliminate the inherent subjectivity of compensation, although the suggestions we have made here will make it far fairer and more transparent. We have not seen any exceptions to this rule yet.

So can OKRs be part of a compensation and reward process? Certainly, and with all the guidelines and caveats shared above, they can actually help (rather than undermine) your employee motivation efforts.

When managers get together to discuss individual employee evaluation and compensation in a calibration session, be sure to have a checklist to call each other out on any implicit biases, errors, or blind-spots that compromise fairness or consistency in evaluations. We’ve seen these biases creep into many calibration scenarios unless they are specifically looked for.

Here are the most common examples of bias:

• Central Tendency – all employees are being rated about average (or above average if you live in the town of Lake Wobegon!).

• Leniency/Strictness bias – the appraiser tends to give all employees unusually high or unusually low ratings.

• Similar-to-Me bias – the appraiser inflates an employee evaluation because of a personal connection or identification with them, rather than objectively looking at their actual performance.

• Halo/Horns bias – An appraiser’s evaluation of an employee’s performance is biased/skewed because of the appraiser’s current judgment of the employee as being good (halo) or bad (horns), while ignoring new evidence to the contrary during the time period.

• Recency bias – an appraisal is based mostly on an employee’s most recent or memorable behavior, rather than on their behavior throughout the appraisal period.

• Contrast bias – an employee evaluation becomes skewed up or down from comparison with the employee that has just been evaluated before them.

We have personally seen many successful examples of goal setting and compensation processes that work well together. Find and customize yours based on the OKR process in place, in addition to the needs, priorities, and culture of your organization, and set your people up to perform as their best selves at work. Please reach out to us if you have questions or would like more support with this process.


Learning to Work with Intelligent Machines

It’s 6:30 in the morning, and Kristen is wheeling her prostate patient into the OR. She’s a senior resident, a surgeon in training. Today she’s hoping to do some of the procedure’s delicate, nerve-sparing dissection herself. The attending physician is by her side, and their four hands are mostly in the patient, with Kristen leading the way under his watchful guidance. The work goes smoothly, the attending backs away, and Kristen closes the patient by 8:15, with a junior resident looking over her shoulder. She lets him do the final line of sutures. She feels great: The patient’s going to be fine, and she’s a better surgeon than she was at 6:30.

Fast-forward six months. It’s 6:30 AM again, and Kristen is wheeling another patient into the OR, but this time for robotic prostate surgery. The attending leads the setup of a thousand-pound robot, attaching each of its four arms to the patient. Then he and Kristen take their places at a control console 15 feet away. Their backs are to the patient, and Kristen just watches as the attending remotely manipulates the robot’s arms, delicately retracting and dissecting tissue. Using the robot, he can do the entire procedure himself, and he largely does. He knows Kristen needs practice, but he also knows she’d be slower and would make more mistakes. So she’ll be lucky if she operates more than 15 minutes during the four-hour surgery. And she knows that if she slips up, he’ll tap a touch screen and resume control, very publicly banishing her to watch from the sidelines.

Surgery may be extreme work, but until recently surgeons in training learned their profession the same way most of us learned how to do our jobs: We watched an expert, got involved in the easier work first, and then progressed to harder, often riskier tasks under close supervision until we became experts ourselves. This process goes by lots of names: apprenticeship, mentorship, on-the-job learning (OJL). In surgery it’s called See one, do one, teach one.

Critical as it is, companies tend to take on-the-job learning for granted; it’s almost never formally funded or managed, and little of the estimated $366 billion companies spent globally on formal training in 2018 directly addressed it. Yet decades of research show that although employer-provided training is important, the lion’s share of the skills needed to reliably perform a specific job can be learned only by doing it. Most organizations depend heavily on OJL: A 2011 Accenture survey, the most recent of its kind and scale, revealed that only one in five workers had learned any new job skills through formal training in the previous five years.

Today OJL is under threat. The headlong introduction of sophisticated analytics, AI, and robotics into many aspects of work is fundamentally disrupting this time-honored and effective approach. Tens of thousands of people will lose or gain jobs every year as those technologies automate work, and hundreds of millions will have to learn new skills and ways of working. Yet broad evidence demonstrates that companies’ deployment of intelligent machines often blocks this critical learning pathway: My colleagues and I have found that it moves trainees away from learning opportunities and experts away from the action, and overloads both with a mandate to master old and new methods simultaneously.

How, then, will employees learn to work alongside these machines? Early indications come from observing learners engaged in norm-challenging practices that are pursued out of the limelight and tolerated for the results they produce. I call this widespread and informal process shadow learning.

Obstacles to Learning
My discovery of shadow learning came from two years of watching surgeons and surgical residents at 18 top-rated teaching hospitals in the United States. I studied learning and training in two settings: traditional (“open”) surgery and robotic surgery. I gathered data on the challenges robotic surgery presented to senior surgeons, residents, nurses, and scrub technicians (who prep patients, help glove and gown surgeons, pass instruments, and so on), focusing particularly on the few residents who found new, rule-breaking ways to learn. Although this research concentrated on surgery, my broader purpose was to identify learning and training dynamics that would show up in many kinds of work with intelligent machines.

To this end, I connected with a small but growing group of field researchers who are studying how people work with smart machines in settings such as internet start-ups, policing organizations, investment banking, and online education. Their work reveals dynamics like those I observed in surgical training. Drawing on their disparate lines of research, I’ve identified four widespread obstacles to acquiring needed skills. Those obstacles drive shadow learning.

1. Trainees are being moved away from their “learning edge.”
Training people in any kind of work can incur costs and decrease quality, because novices move slowly and make mistakes. As organizations introduce intelligent machines, they often manage this by reducing trainees’ participation in the risky and complex portions of the work, as Kristen found. Thus trainees are being kept from situations in which they struggle near the boundaries of their capabilities and recover from mistakes with limited help—a requirement for learning new skills.

The same phenomenon can be seen in investment banking. New York University’s Callen Anthony found that junior analysts in one firm were increasingly being separated from senior partners as those partners interpreted algorithm-assisted company valuations in M&As. The junior analysts were tasked with simply pulling raw reports from systems that scraped the web for financial data on companies of interest and passing them to the senior partners for analysis. The implicit rationale for this division of labor? First, reduce the risk that junior people would make mistakes in doing sophisticated work close to the customer; and second, maximize senior partners’ efficiency: The less time they needed to explain the work to junior staffers, the more they could focus on their higher-level analysis. This provided some short-term gains in efficiency, but it moved junior analysts away from challenging, complex work, making it harder for them to learn the entire valuation process and diminishing the firm’s future capability.

2. Experts are being distanced from the work.
Sometimes intelligent machines get between trainees and the job, and other times they’re deployed in a way that prevents experts from doing important hands-on work. In robotic surgery, surgeons don’t see the patient’s body or the robot for most of the procedure, so they can’t directly assess and manage critical parts of it. For example, in traditional surgery, the surgeon would be acutely aware of how devices and instruments impinged on the patient’s body and would adjust accordingly; but in robotic surgery, if a robot’s arm hits a patient’s head or a scrub is about to swap a robotic instrument, the surgeon won’t know unless someone tells her. This has two learning implications: Surgeons can’t practice the skills needed to make holistic sense of the work on their own, and they must build new skills related to making sense of the work through others.

Benjamin Shestakofsky, now at the University of Pennsylvania, described a similar phenomenon at a pre-IPO start-up that used machine learning to match local laborers with jobs and that provided a platform for laborers and those hiring them to negotiate terms. At first the algorithms weren’t making good matches, so managers in San Francisco hired people in the Philippines to manually create each match. And when laborers had difficulty with the platform—for instance, in using it to issue price quotes to those hiring, or to structure payments—the start-up managers outsourced the needed support to yet another distributed group of employees, in Las Vegas. Given their limited resources, the managers threw bodies at these problems to buy time while they sought the money and additional engineers needed to perfect the product. Delegation allowed the managers and engineers to focus on business development and writing code, but it deprived them of critical learning opportunities: It separated them from direct, regular input from customers—the laborers and the hiring contractors—about the problems they were experiencing and the features they wanted.

A company’s deployment of AI may move trainees away from learning opportunities.

3. Learners are expected to master both old and new methods.
Robotic surgery comprises a radically new set of techniques and technologies for accomplishing the same ends that traditional surgery seeks to achieve. Promising greater precision and ergonomics, it was simply added to the curriculum, and residents were expected to learn robotic as well as open approaches. But the curriculum didn’t include enough time to learn both thoroughly, which often led to a worst-case outcome: The residents mastered neither. I call this problem methodological overload.

Shreeharsh Kelkar, at UC Berkeley, found that something similar happened to many professors who were using a new technology platform called edX to develop massive open online courses (MOOCs). EdX provided them with a suite of course-design tools and instructional advice based on fine-grained algorithmic analysis of students’ interaction with the platform (clicks, posts, pauses in video replay, and so on). Those who wanted to develop and improve online courses had to learn a host of new skills—how to navigate the edX user interface, interpret analytics on learner behavior, compose and manage the course’s project team, and more—while keeping “old school” skills sharp for teaching their traditional classes. Dealing with this tension was difficult for everyone, especially because the approaches were in constant flux: New tools, metrics, and expectations arrived almost daily, and instructors had to quickly assess and master them. The only people who handled both old and new methods well were those who were already technically sophisticated and had significant organizational resources.

4. Standard learning methods are presumed to be effective.
Decades of research and tradition hold trainees in medicine to the “see one, do one, teach one” method, but as we’ve seen, it doesn’t adapt well to robotic surgery. Nonetheless, pressure to rely on approved learning methods is so strong that deviation is rare: Surgical-training research, standard routines, policy, and senior surgeons all continue to emphasize traditional approaches to learning, even though the method clearly needs updating for robotic surgery.

Sarah Brayne, at the University of Texas, found a similar mismatch between learning methods and needs among police chiefs and officers in Los Angeles as they tried to apply traditional policing approaches to beat assignments generated by an algorithm. Although the efficacy of such “predictive policing” is unclear, and its ethics are controversial, dozens of police forces are becoming deeply reliant on it. The LAPD’s PredPol system breaks the city up into 500-foot squares, or “boxes,” assigns a crime probability to each one, and directs officers to those boxes accordingly. Brayne found that it wasn’t always obvious to the officers—or to the police chiefs—when and how the former should follow their AI-driven assignments. In policing, the traditional and respected model for acquiring new techniques has been to combine a little formal instruction with lots of old-fashioned learning on the beat. Many chiefs therefore presumed that officers would mostly learn how to incorporate crime forecasts on the job. This dependence on traditional OJL contributed to confusion and resistance to the tool and its guidance. Chiefs didn’t want to tell officers what to do once “in the box,” because they wanted them to rely on their experiential knowledge and discretion. Nor did they want to irritate the officers by overtly reducing their autonomy and coming across as micromanagers. But by relying on the traditional OJL approach, they inadvertently sabotaged learning: Many officers never understood how to use PredPol or its potential benefits, so they wholly dismissed it—yet they were still held accountable for following its assignments. This wasted time, decreased trust, and led to miscommunication and faulty data entry—all of which undermined their policing.
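PredPol’s actual models and data are proprietary, but the grid-and-probability logic described above can be illustrated with a minimal sketch. Every name, number, and the ranking function here are hypothetical, for illustration only:

```python
# Hypothetical sketch of grid-based patrol assignment, loosely modeled on
# the description above; PredPol's real models are proprietary.

def top_boxes(crime_probs, n=3):
    """Given a dict mapping grid-box IDs to estimated crime probabilities,
    return the n box IDs with the highest probability."""
    return sorted(crime_probs, key=crime_probs.get, reverse=True)[:n]

# Each key stands for a 500-foot grid square ("box"); scores are illustrative.
probs = {"box_12": 0.08, "box_47": 0.31, "box_03": 0.17, "box_88": 0.02}
assignments = top_boxes(probs, n=2)
print(assignments)  # highest-probability boxes first
```

The tension Brayne documents follows directly from this shape: the system outputs boxes to sit in, while the officers’ incentives still count arrests, citations, and FIs.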

Shadow Learning Responses
Faced with such barriers, shadow learners are bending or breaking the rules out of view to get the instruction and experience they need. We shouldn’t be surprised. Close to a hundred years ago, the sociologist Robert Merton showed that when legitimate means are no longer effective for achieving a valued goal, deviance results. Expertise—perhaps the ultimate occupational goal—is no exception: Given the barriers I’ve described, we should expect people to find deviant ways to learn key skills. Their approaches are often ingenious and effective, but they can take a personal and an organizational toll: Shadow learners may be punished (for example, by losing practice opportunities and status) or cause waste and even harm. Still, people repeatedly take those risks, because their learning methods work well where approved means fail. It’s almost always a bad idea to uncritically copy these deviant practices, but organizations do need to learn from them.

Following are the shadow learning practices that I and others have observed:

Seeking struggle.
Recall that robotic surgical trainees often have little time on task. Shadow learners get around this by looking for opportunities to operate near the edge of their capability and with limited supervision. They know they must struggle to learn, and that many attending physicians are unlikely to let them. The subset of residents I studied who did become expert found ways to get the time on the robots they needed. One strategy was to seek collaboration with attendings who weren’t themselves seasoned experts. Residents in urology—the specialty having by far the most experience with robots—would rotate into departments whose attendings were less proficient in robotic surgery, allowing the residents to leverage the halo effect of their elite (if limited) training. The attendings were less able to detect quality deviations in their robotic surgical work and knew that the urology residents were being trained by true experts in the practice; thus they were more inclined to let the residents operate, and even to ask for their advice. But few would argue that this is an optimal learning approach.

When legitimate means can no longer achieve a goal, deviance results.

What about those junior analysts who were cut out of complex valuations? The junior and senior members of one group engaged in shadow learning by disregarding the company’s emerging standard practice and working together. Junior analysts continued to pull raw reports to produce the needed input, but they worked alongside senior partners on the analysis that followed.

In some ways this sounds like a risky business move. Indeed, it slowed down the process, and because it required the junior analysts to handle a wider range of valuation methods and calculations at a breakneck pace, it introduced mistakes that were difficult to catch. But the junior analysts developed a deeper knowledge of the multiple companies and other stakeholders involved in an M&A and of the relevant industry and learned how to manage the entire valuation process. Rather than function as a cog in a system they didn’t understand, they engaged in work that positioned them to take on more-senior roles. Another benefit was the discovery that, far from being interchangeable, the software packages they’d been using to create inputs for analysis sometimes produced valuations of a given company that were billions of dollars apart. Had the analysts remained siloed, that might never have come to light.

Tapping frontline know-how.
As discussed, robotic surgeons are isolated from the patient and so lack a holistic sense of the work, making it harder for residents to gain the skills they need. To understand the bigger picture, residents sometimes turn to scrub techs, who see the procedure in its totality: the patient’s entire body; the position and movement of the robot’s arms; the activities of the anesthesiologist, the nurse, and others around the patient; and all the instruments and supplies from start to finish. The best scrubs have paid careful attention during thousands of procedures. When residents shift from the console to the bedside, therefore, some bypass the attending and go straight to these “superscrubs” with technical questions, such as whether the intra-abdominal pressure is unusual, or when to clear the field of fluid or of smoke from cauterization. They do this despite norms and often unbeknownst to the attending.

And what about the start-up managers who were outsourcing jobs to workers in the Philippines and Las Vegas? They were expected to remain laser focused on raising capital and hiring engineers. But a few spent time with the frontline contract workers to learn how and why they made the matches they did. This led to insights that helped the company refine its processes for acquiring and cleaning data—an essential step in creating a stable platform. Similarly, some attentive managers spent time with the customer service reps in Las Vegas as they helped workers contend with the system. These “ride alongs” led the managers to divert some resources to improving the user interface, helping to sustain the start-up as it continued to acquire new users and recruit engineers who could build the robust machine learning systems it needed to succeed.

Redesigning roles.
The new work methods we create to deploy intelligent machines are driving a variety of shadow learning tactics that restructure work or alter how performance is measured and rewarded. A surgical resident may decide early on that she isn’t going to do robotic surgery as a senior physician and will therefore consciously minimize her robotic rotation. Some nurses I studied prefer the technical troubleshooting involved in robotic assignments, so they surreptitiously avoid open surgical work. Nurses who staff surgical procedures notice emerging preferences and skills and work around blanket staffing policies to accommodate them. People tacitly recognize and develop new roles that are better aligned with the work—whether or not the organization formally does so.

Consider how some police chiefs reframed expectations for beat cops who were having trouble integrating predictive analytics into their work. Brayne found that many officers assigned to patrol PredPol’s “boxes” appeared to be less productive on traditional measures such as number of arrests, citations, and FIs (field interview cards—records made by officers of their contacts with citizens, typically people who seem suspicious). FIs are particularly important in AI-assisted policing, because they provide crucial input data for predictive systems even when no arrests result. When cops went where the system directed them, they often made no arrests, wrote no tickets, and created no FIs.

Recognizing that these traditional measures discouraged beat cops from following PredPol’s recommendations, a few chiefs sidestepped standard practice and publicly and privately praised officers not for making arrests and delivering citations but for learning to work with the algorithmic assignments. As one captain said, “Good, fine, but we are telling you where the probability of a crime is at, so sit there, and if you come in with a zero [no crimes], that is a success.” These chiefs were taking a risk by encouraging what many saw as bad policing, but in doing so they were helping to move the law enforcement culture toward a future in which the police will increasingly collaborate with intelligent machines, whether or not PredPol remains in the tool kit.

Curating solutions.
Trainees in robotic surgery occasionally took time away from their formal responsibilities to create, annotate, and share play-by-play recordings of expert procedures. In addition to providing a resource for themselves and others, making the recordings helped them learn, because they had to classify phases of the work, techniques, types of failures, and responses to surprises.

Faculty members who were struggling to build online courses while maintaining their old-school skills used similar techniques to master the new technology. EdX provided tools, templates, and training materials to make things easier for instructors, but that wasn’t enough. Especially in the beginning, far-flung instructors in resource-strapped institutions took time to experiment with the platform, make notes and videos on their failures and successes, and share them informally with one another online. Establishing these connections was hard, especially when the instructors’ institutions were ambivalent about putting content and pedagogy online in the first place.

Shadow learning of a different type occurred among the original users of edX—well-funded, well-supported professors at topflight institutions who had provided early input during the development of the platform. To get the support and resources they needed from edX, they surreptitiously shared techniques for pitching desired changes in the platform, securing funding and staff support, and so on.

Learning from shadow learners.
Obviously shadow learning is not the ideal solution to the problems it addresses. No one should have to risk getting fired just to master a job. But these practices are hard-won, tested paths in a world where acquiring expertise is becoming more difficult and more important.

The four classes of behavior shadow learners exhibit—seeking struggle, tapping frontline know-how, redesigning roles, and curating solutions—suggest corresponding tactical responses. To take advantage of the lessons shadow learners offer, technologists, managers, experts, and workers themselves should:

ensure that learners get opportunities to struggle near the edge of their capacity in real (not simulated) work so that they can make and recover from mistakes
foster clear channels through which the best frontline workers can serve as instructors and coaches
restructure roles and incentives to help learners master new ways of working with intelligent machines
build searchable, annotated, crowdsourced “skill repositories” containing tools and expert guidance that learners can tap and contribute to as needed
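The “skill repository” idea in the last item above can be made concrete with a minimal sketch. The schema, field names, and example entries are my own illustration, not drawn from any real system:

```python
# Minimal sketch of a searchable, crowdsourced skill repository;
# the schema here is illustrative, not from any specific product.

repository = []

def contribute(title, tags, annotation, author):
    """Add a skill entry that any worker can later search and extend."""
    repository.append(
        {"title": title, "tags": set(tags), "annotation": annotation,
         "author": author}
    )

def search(tag):
    """Return titles of all entries carrying the given tag."""
    return [e["title"] for e in repository if tag in e["tags"]]

contribute("Clearing smoke during cauterization", ["robotic-surgery", "scrub"],
           "Clear the field early; waiting obscures the camera.", "superscrub_1")
contribute("Docking the robot arms", ["robotic-surgery", "setup"],
           "Check clearance around the patient's head first.", "resident_4")

print(search("robotic-surgery"))
```

The point of the design is that frontline experts, like the “superscrubs” described earlier, contribute annotations directly, so learners can tap them without breaking norms.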
The specific approach to these activities depends on organizational structure, culture, resources, technological options, existing skills, and, of course, the nature of the work itself. No single best practice will apply in all circumstances. But a large body of managerial literature explores each of these, and outside consulting is readily available.

More broadly, my research, and that of my colleagues, suggests three organizational strategies that may help leverage shadow learning’s lessons:

1. Keep studying it.
Shadow learning is evolving rapidly as intelligent technologies become more capable. New forms will emerge over time, offering new lessons. A cautious approach is critical. Shadow learners often realize that their practices are deviant and that they could be penalized for pursuing them. (Imagine if a surgical resident made it known that he sought out the least-skilled attendings to work with.) And middle managers often turn a blind eye to these practices because of the results they produce—as long as the shadow learning isn’t openly acknowledged. Thus learners and their managers may be less than forthcoming when an observer, particularly a senior manager, declares that he wants to study how employees are breaking the rules to build skills. A good solution is to bring in a neutral third party who can ensure strict anonymity while comparing practices across diverse cases. My informants came to know and trust me, and they were aware that I was observing work in numerous work groups and facilities, so they felt confident that their identities would be protected. That proved essential in getting them to open up.

2. Adapt the shadow learning practices you find to design organizations, work, and technology.
Organizations have often handled intelligent machines in ways that make it easier for a single expert to take more control of the work, reducing dependence on trainees’ help. Robotic surgical systems allow senior surgeons to operate with less assistance, so they do. Investment banking systems allow senior partners to exclude junior analysts from complex valuations, so they do. All stakeholders should insist on organizational, technological, and work designs that improve productivity and enhance on-the-job learning. In the LAPD, for example, this would mean moving beyond changing incentives for beat cops to efforts such as redesigning the PredPol user interface, creating new roles to bridge police officers and software engineers, and establishing a cop-curated repository for annotated best practice use cases.

3. Make intelligent machines part of the solution.
AI can be built to coach learners as they struggle, coach experts on their mentorship, and connect those two groups in smart ways. For example, when Juho Kim was a doctoral student at MIT, he built ToolScape and LectureScape, which allow for crowdsourced annotation of instructional videos and provide clarification and opportunities for practice where many prior users have paused to look for them. He called this learnersourcing. On the hardware side, augmented reality systems are beginning to bring expert instruction and annotation right into the flow of work. Existing applications use tablets or smart glasses to overlay instructions on work in real time. More-sophisticated intelligent systems are expected soon. Such systems might, for example, superimpose a recording of the best welder in the factory on an apprentice welder’s visual field to show how the job is done, record the apprentice’s attempt to match it, and connect the apprentice to the welder as needed. The growing community of engineers in these domains has mostly been focused on formal training, but the deeper crisis is in on-the-job learning. We need to redirect our efforts there.
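Learnersourcing rests on a simple signal: timestamps where many prior viewers paused often mark points of confusion worth clarifying. The rough sketch below assumes a simple time-bucketing heuristic; the function, bucket size, and threshold are my illustration, not Kim’s actual implementation:

```python
# Rough sketch of learnersourcing: find video timestamps where many prior
# viewers paused, which often mark points of confusion worth annotating.
# The 10-second bucketing heuristic is an assumption, not Kim's method.
from collections import Counter

def confusion_hotspots(pause_times, bucket=10, min_count=3):
    """Bucket pause timestamps (in seconds) and return the start of each
    bucket where at least min_count viewers paused."""
    counts = Counter((t // bucket) * bucket for t in pause_times)
    return sorted(b for b, c in counts.items() if c >= min_count)

# Illustrative pause log aggregated across many viewers of one video.
pauses = [12, 14, 18, 95, 96, 97, 99, 240, 301, 303]
print(confusion_hotspots(pauses))
```

A system like LectureScape can then surface clarification and practice at exactly those hotspots, turning the crowd’s struggle into guidance for the next learner.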

For thousands of years, advances in technology have driven the redesign of work processes, and apprentices have learned necessary new skills from mentors. But as we’ve seen, intelligent machines now motivate us to peel apprentices away from masters, and masters from the work itself, all in the name of productivity. Organizations often unwittingly choose productivity over considered human involvement, and learning on the job is getting harder as a result. Shadow learners are nevertheless finding risky, rule-breaking ways to learn. Organizations that hope to compete in a world filling with increasingly intelligent machines should pay close attention to these “deviants.” Their actions provide insight into how the best work will be done in the future, when experts, apprentices, and intelligent machines work, and learn, together.

A version of this article appeared in the September–October 2019 issue of Harvard Business Review.
Matt Beane is an assistant professor of technology management at the University of California, Santa Barbara, and a research affiliate with MIT’s Initiative on the Digital Economy.


Keys to effective succession planning: Talent management special report

Are changes in your market forcing a change in strategy that will demand new talent?

Have one or more of your long-time stars started thinking about moving to a competitor or retiring?

Or are you just trying to make sure the wheels keep turning for a few weeks or months if one of your top people gets sick or dies unexpectedly?

Succession planning is a talent management must-do for organizations of all sizes, whether a global corporation, a small non-profit, a mid-sized college or a family business with a dozen employees.

Long-term success depends on creating a plan for how you’ll keep your team moving forward when you lose a key player or encounter a skills gap that must be filled quickly.

It brings focus to the process of identifying top performers, employees with strong potential and the people that you need to push hard or push out.

For employees, the succession planning process translates into stretch opportunities that can help them learn new skills, advance their careers, increase their value to the team and boost earning power. All of those positives can translate into an increased commitment to your organization.

What are you planning for?
It’s important to differentiate succession planning from other strategic staffing plans, says William J. Rothwell in a Dale Carnegie white paper entitled The Nuts and Bolts of Succession Planning.

What it’s not is replacement planning, Rothwell says. That’s the process of identifying individuals within an organization, and often in the same division or department, who would be best-equipped to serve as backups for current employees.

While replacement planning is an important part of an organization’s overall operating strategy, succession planning takes a much broader viewpoint – it encompasses the total operation, rather than individual positions, departments or divisions.

As Robert E. Lewis and Robert J. Heckman put it in their oft-cited paper, Talent management: A critical review:

Consider the following question: If you were to begin the process of constructing a building, how would you go about it? Would you assemble a group of the best professionals in each necessary craft (plumbing, electrical systems, carpentry, etc.) and let them define your building? Or would you start with an analysis of the relationship between “construction practices” and some outcome you hope to achieve (building longevity or cost of operation)? Probably not. You probably would first meet with an architect to begin drawing a blueprint after considering a series of key questions such as: What do you hope to accomplish with this building? Will those goals appeal to the intended customers (tenants or shoppers)? What alternatives for orienting the building on its site best accomplish its purpose?

It is always important to be clear about the end goal of any strategic planning effort, and succession planning is no different.

The first thing to do is figure out your plan’s target and scope. To be effective, the succession planning process should be:

Formal. While a succession planning process needs to match an organization’s overall culture, whether buttoned down and hierarchical or more casual and egalitarian, everyone involved needs to understand that this is a well-defined process with support from top leadership and mission-critical outcomes at stake.

Comprehensive. It’s tempting to think of succession planning as applying only to senior leadership roles, but an effective plan will look at critical positions and people at every level of the organization.

Strategically Linked. Every aspect of your succession plan needs to support the organization’s overall strategy. That is the guiding star that will help to define the kinds of people and types of training you need to put in place as you build a talent pipeline to the future.

Linking Succession Planning to Your Strategic Plan
A paint-by-numbers succession planning effort is doomed to give you an uninspired and amateurish result. Only by matching your succession planning to your organization’s guiding strategy can you confidently identify the positions, skills and employees needed to succeed.

Whatever your organization’s size and your target, a succession plan should focus on four specific outcomes:

Identify mission-critical positions and any current or impending talent gaps – based on the strategic opportunities you identify and how you create competitive advantage. Which jobs and skills are must-haves? Do those positions already exist or do we need to create them?
Identify employees at every level who have the potential to assume greater responsibility advancing your organization’s strategic goals and how they fit together – what combination of A, B and C performers do we need and how do we attract and keep them?
Encourage meaningful investment in a training and development program for high-potential employees – be ready to defend allocating resources to a given talent pool(s) or to talent in general rather than technology, marketing or other investments.
Establish a process for revisiting and revising your succession plan as conditions change.
With those factors in mind, how do you go about building and refining a succession plan? Here’s some help.

Building a team
You’ve committed to building a succession plan; now it’s time to think about who you need on the team that will do the work. You need to decide who will design the plan and also determine who will be responsible for implementing and evolving your plan when it’s in place.

You’ll want to include people with different skills and from a variety of functions when assembling the succession planning team.

Of course, in smaller organizations, team members are going to wear multiple hats.

Some of the needed skills include:

Organization and process-orientation. While the succession planning effort itself needs to focus on goals, you’ll want someone on the team who will keep things moving along during the plan development phase.

That person needs to have enough authority to give other members assignments and to get answers from various departments.

Organizational knowledge. The team needs to include someone with a solid handle on most of the organization’s existing job descriptions and insight into any new positions that might be needed to accomplish the goals you’ve set.

And at least one member of the team should have connections throughout the organization and know who they can approach to build support for the succession planning effort.

Effective communication. Like many other strategic initiatives, the information gathering phase of succession planning can create nervousness and give rise to rumors about job changes (often true) or massive job losses (often false).

Keeping the rest of your organization working productively while this is going on requires skillful communication: share enough information to keep a lid on any panic-button pushers.

If handled well, giving employees insight into the process can help reassure them that company leaders are preparing the organization for the long haul.

Identifying strengths and weaknesses
So, you’ve committed to building a succession plan and picked your team. What’s their next step?

It’s time to brainstorm. What are all the internal and external factors that your plan needs to account for? Here are some questions to consider:

Organizations face increasingly rapid changes in macroeconomic, industry and social trends — which ones can you anticipate and prepare for?
Competition can come from anywhere in the world. How will you keep an eye on — and respond to — new challenges?
Does your team have all the skills you’ll need? Can training fill the gaps or will you need to hire?
Boomers are retiring and the generational mix of your workforce will look very different soon. What do demographic changes mean for your organization?
The research is clear: companies with a diverse workforce outperform the competition. How will you leverage succession planning to increase diversity in your line organization and leadership team?
Do you need to change your org structure and talent management processes to match these challenges?
Build or Buy? Finding the right people
The first phase of this part of the process is to identify key/critical positions, ideally at every level of the organization. A position is determined to be key or critical under the following criteria:

Organizational structure — The position is a key contributor in achieving the organization’s mission
Key task — The position performs a critical task that would stop or hinder vital functions from being performed if it were left vacant
Specialized competencies — The position requires a specialized or unique skill set that is difficult to replace
Geography — The position is the only one of its kind in a particular location or it would be difficult for a similar position in another location to carry out its functions remotely,
Potential high turnover job classes — Positions in danger of “knowledge drain” due to impending retirements or high market demand for the skill set, and
Future needs — Positions that need to be created and defined, based on the SWOT analysis that launched the succession management project.
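The six criteria above can be applied as a plain, repeatable checklist. The sketch below is an illustration only; the flag names and the any-one-criterion rule are my assumptions, not part of the source framework:

```python
# Illustrative sketch: flag a position as key/critical if it meets any of
# the six criteria listed above. Names and the threshold are assumptions.

CRITERIA = ["mission_contributor", "blocks_vital_tasks", "rare_skill_set",
            "sole_in_location", "turnover_risk", "future_need"]

def is_critical(position):
    """Return True if the position meets at least one criticality criterion."""
    return any(position.get(c, False) for c in CRITERIA)

plant_manager = {"mission_contributor": True, "turnover_risk": True}
receptionist = {}
print(is_critical(plant_manager), is_critical(receptionist))
```

In practice a team would debate each flag rather than compute it, but writing the criteria down this way keeps assessments consistent across departments.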
Skillset analysis
Once critical positions and areas at risk of high turnover are identified, it’s time to look at the specific competencies required to do those jobs effectively.

The questions you need to ask during the skill set analysis are closely related to the strategic questions your team addressed in the first part of this process:

What are the external and internal factors affecting this specific position?
How will the position be used in the future?
What competencies or skillsets will be required?
What is the current bench strength?
How will you provide stretch opportunities to high-potential employees?
What is the path from where they are to where you need them to be?
What are the gaps (competencies or skillsets not possessed by current employees)?
At the end of this analysis you will have the answer to the most important succession planning question: “Can we develop our existing pool of internal candidates quickly enough or must we ramp up our search for strong outside candidates?”
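The "develop or recruit" question above boils down to a set comparison: which required competencies does no one on the current bench possess? A minimal sketch, with purely illustrative names and data:

```python
def competency_gaps(required: set[str], bench: dict[str, set[str]]) -> set[str]:
    """Return competencies no current employee possesses — the build-or-buy signal."""
    covered = set().union(*bench.values()) if bench else set()
    return required - covered

# Hypothetical example data for one critical position.
required = {"restructuring", "strategic negotiation", "digital growth"}
bench = {
    "Alice": {"strategic negotiation", "team building"},
    "Bob": {"digital growth"},
}
print(competency_gaps(required, bench))  # -> {'restructuring'}
```

An empty result suggests internal development can close the gaps; any remaining competencies point toward outside recruiting.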

The good news is you now have a clear idea of what you have and what you are still looking for and can move on to the next steps in the process, which we look at in other reports in our HR Morning talent management series:

designing the right training programs for each talent pool based on strategic importance, available resources and growth path
refining your recruiting plan to maximize your chances to get the most from your recruiting efforts, to use your time and energy wisely and effectively, and to pursue only the most likely paths to recruiting success and
retaining key personnel.

Matching talent to value

We’re really testing for, where is the hard-core
work and strategic decision making and leadership
happening? In the cybersecurity example, one of
the things that we look for frequently and have
discussions about is, how much value is at risk?
And how much do we want to assign in terms of
protecting risk? That’s where cybersecurity roles
come in rather frequently.
Mike Barriere: One thing I would emphasize is the
front end of this. When you think of value drivers,
let’s say you think of some that might be around
organic growth, let’s say revenue growth as an
example. You start to look across the organization.
Let’s say you want to grow a billion dollars on the
top line. You look at your commercial groups, sales,
and marketing. You look at product or operations,
depending on your business, and then you look at
those enabling functions.
You could literally take a billion dollars of top-line
revenue growth and say, OK, 30 percent of that has
to come from sales and marketing. They have to go
out and create the demand. But obviously, that’s
not all of it. We have to deliver the product. Maybe
there’s an R&D contribution, or maybe there’s a
real operations component. These are the creator
roles, because they’re so important to generate
the demand for, in this case, the top-line revenue
growth, as well as the delivery.
So maybe 30 percent is in sales and marketing,
another 30 percent in operations. Now you still
have 40 percent of that value, which could come
from these enabling functions like technology or
HR to provide the talent to the sales teams or the
operations teams. You start to build this mapping—
and we have a great way to model this—of the
value driver. What functions are contributing what
percentage of that value? Then that’s where you
get into the valuation of the role.
Let’s say you take 30 percent of a billion [dollars]
into sales and marketing, there’s a value there, and
you say, OK, well, there’s seven roles in marketing
that are absolutely essential to grow top-line
revenue. They’re those key account managers
that we’ve been using as an example. You then
distribute that value across those roles. There
are some heuristics that we work with, some
percentages, whether you’re a creator or enabler
type of role. But the beauty of putting this into a
model is that then you can do sensitivity analysis.
So maybe it’s 60 percent on the front end and only
20 percent on the product side, or vice versa. You
can quickly see the impact on the kind of roles that
pop up.
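The billion-dollar example above can be sketched as a small model: split the value driver across functions, spread each function's share across its key roles, then rerun with different splits for sensitivity analysis. This is a simplified illustration (the even per-role split is my assumption; the speakers mention heuristics that weight creator vs. enabler roles differently):

```python
def value_per_role(total_value: float,
                   function_split: dict[str, float],
                   key_roles: dict[str, int]) -> dict[str, float]:
    """Distribute a value driver across functions, then evenly across each
    function's key roles. Shares must sum to 1."""
    assert abs(sum(function_split.values()) - 1.0) < 1e-9
    return {fn: total_value * share / key_roles[fn]
            for fn, share in function_split.items()}

TOTAL = 1_000_000_000  # $1B of top-line revenue growth, as in the example
roles = {"sales_marketing": 7, "operations": 10, "enabling": 20}

# Base case: 30% sales and marketing, 30% operations, 40% enabling functions.
base = value_per_role(TOTAL, {"sales_marketing": 0.30,
                              "operations": 0.30,
                              "enabling": 0.40}, roles)

# Sensitivity run: shift value to the front end and compare per-role stakes.
alt = value_per_role(TOTAL, {"sales_marketing": 0.60,
                             "operations": 0.20,
                             "enabling": 0.20}, roles)
```

Comparing `base` and `alt` shows quickly how the ranking of high-stakes roles shifts with the assumed split, which is the sensitivity analysis described above.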
Simon London: So we’ve had a robust conversation
as a management team about what our value
agenda is. We’ve hammered out some areas of
ambiguity. We’ve done the hard work of then
identifying the roles that are really going to matter
over the next few years. What happens next?
Mike Barriere: The fun stuff. First is, if these are
the roles, what does success look like? I come
from HR, so I can pick on myself. Usually, HR
doesn’t have up-to-date, nimble, and dynamic role
descriptions that capture what a role needs to
do today.
This is part of the issue, Simon, where a lot of our
HR processes are dated and not designed for this
period of exponential change and disruption. The
first thing, when you say this is a critical role, we all
agree it’s one of the top 50, is to define, and Carla
mentioned it, a role card.
A role card consists of the mission for the job, and
then in language of jobs to be done, what are the
five to seven things that are most important that
this role has to accomplish to be successful? You
also know the value that the role should capture.
It should be written in a language that’s clear,
concise, and tied to those value drivers that we
talked about.
Some roles might hit two or three or even four
value drivers, and you want to be clear that this
is exactly what needs to be done in the role to
capture that. That’s half the role card. The other
half is, how are you going to assess somebody
against those requirements?
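The role card described here has a clear shape: a mission, five to seven jobs to be done, the value at stake, and assessment criteria. A hypothetical sketch of that record (field names are mine, not a published template):

```python
from dataclasses import dataclass

@dataclass
class RoleCard:
    """Sketch of a role card as described: mission, jobs to be done,
    value to capture, and how a person is assessed against it."""
    role: str
    mission: str
    jobs_to_be_done: list[str]      # the five to seven most important outcomes
    value_at_stake: float           # value the role should capture, in dollars
    assessment_criteria: list[str]  # how a candidate is scored against the card

    def __post_init__(self):
        # Enforce the "five to seven" constraint mentioned in the discussion.
        if not 5 <= len(self.jobs_to_be_done) <= 7:
            raise ValueError("A role card lists five to seven jobs to be done")
```

Keeping the card this small is the point: it stays nimble enough to rewrite as value drivers change, unlike a traditional job description.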
Role descriptions were designed to stand the test
of time. A lot of them are old, static, they’ve been
around. Compensation teams use them to price
jobs relative to market. Search firms use them.
We’re talking about something different. We need a
much more nimble, concise way to think about jobs
and design jobs, so you don’t get this phenomenon
of double-, triple-hatting somebody, or having
them do 60 percent non–value-add work.
We want to be really crisp, and that’s why we
even changed the language to role card, not a job
description, because you want to be clear about,
this is what this role needs to do over, like, a three-year time horizon, and this is the value that you can
measure success in the role against.
Simon London: It’s a hybrid between a role
description and your annual objectives, somewhere
in the middle there. It’s a different critter.
Carla Arellano: Yes, very much so. When we’ve
developed these baseball cards or role cards for
leaders, one of the first things that happens is
the reaction that, oh, we have the person in this
role doing 50 other things that are not on that list,
usually followed by, how would we expect them to
actually deliver this, when we have them doing all
these other things. And are they best placed for
everything else?
The other thing, where Mike was going with this
notion of the knowledge, skills, attributes, and
experiences, is that most organizations have
moved people into roles based on the fact that they
were successful in this other role, or somebody
likes them and knows them, and they’ve worked
well together for a very long time.
But the focus should instead be on, do we want
somebody who can build a team of very diverse
profiles to go do something different or hard?
Or do we want somebody who can be a strategic
negotiator with customers? Those are very
different requirements for a role. Getting specific
on those and how you might measure it can be
hard. But the rewards are very high.
Mike Barriere: Now we’re going into the third
step, which is the matching. We defined the value
agenda, the drivers, we know where the critical
roles are in the organization, we write the role cards,
and now it’s time to assess and match the talent.
A lot of times you’ll find that you don’t have your
best talent in a good percentage of the critical
roles and your best talent is somewhere else and
not even on the radar for these kinds of roles. That’s
why the matching is really important.
Simon London: What’s the percentage of
mismatch? When you do this for the first time,
do you find that organizations are 70 percent
mismatched? Or in most organizations is it more
like 10 percent of the roles cause some serious
head-scratching and conversations once these
cards are laid out?
Mike Barriere: What comes to mind is a recent
experience where we found 45 percent of the roles
were a great match. The incumbent was the best
fit. Twenty percent to 30 percent typically have
gaps, but they’re addressable. They do have some
gaps, but they are the best talent you have, and if
you have clarity about how to help them address
their gaps—which we can get to in a minute, the
techniques for that—typically, it could be in the
ballpark of 20 to 30 percent that are mismatched.
It doesn’t mean you fire them. It means that there’s
probably a better role for them, or you’ve got to
look either internally or maybe go external.
These are not the roles you want to give somebody
a stretch assignment for. These are your critical
roles that are going to deliver value, so you really
want to put your best players in these roles.
Carla Arellano: I think the other thing alongside
that, Simon, that I’ve seen frequently is looking at
the team around a role or looking at the team of
roles. Because you might find that there’s a gap
of an incumbent to a role. You might also find that
across a team, there’s a core missing experience
or capability set that you need to complement in
some way.
I have an organization that’s been going through
a restructuring for quite a while. None of the
individuals in critical roles had actual restructuring
experience, which was a little bit of a flag. It wasn’t
that every single one of those roles needed to have
it, but it was important that at least one or two of
them did to get there. There’s that individual view,
and then there’s a little bit of a team view as well.
Simon London: This goes back to my slight
skepticism about how easy it is to assign value to
roles, because I think we’re acknowledging that so
much of what goes on in an organization is a matter
of team production. There are teams delivering
value, not individuals.
If you say you need to reinforce a role by
bringing in somebody else as a wingman—it’s
a gendered phrase—but a wingman effectively
who compensates for one element. Aren’t you
immediately beginning to undermine this idea that
it’s that role and that role alone that’s delivering
the value?
Mike Barriere: No, I have a very strong view on that.
A critical role leader needs to absolutely leverage
the team—and it might not even be their own team.
There could be an important collaboration across
function or function to a business unit or across a
business unit.
Now we’re getting into, how do you optimize value
capture for that critical role leader? They’re on
the hook. You need somebody responsible. If you
just try to tackle it from the team dynamic, you’re
going to miss something. We like to think that
there’s a critical role leader that’s on the hook, but
part of their success is going to be driven by how
well they build the team around them and how well
they can build cross-functionally or collaborate
horizontally in the organization. It doesn’t put teams
aside and say it’s only the role that’s important.
To be successful in the role, team competency is
absolutely essential.
Simon London: Ultimately somebody has to be
on the hook, though. I think that’s the message.
Somebody has to own the delivery of that value.
Mike Barriere: Exactly, because that happens a
lot. You take a team approach or you do something
broad strokes without really having that person
who you look at and say, that’s their role, the
value is tied to them in that role. But the way they
succeed involves not only the team, it involves
the workforce. And does the workforce have the
capabilities? What about the culture, how they
run the place. Do they take out organizational
bureaucracy, so they can move with speed and
agility? A lot of the organizational things light up
here, but this is the front end to prioritize the role.
Then how do you make a leader successful in that
role vis-à-vis the top team and the organization and
the capabilities and the culture to run the place?
Simon London: Do you get pushback from
organizations at that cultural level? That doing this
in this way just feels kind of countercultural?
Carla Arellano: Definitely. Maybe more than
pushback, there tends to be a very deep-seated
philosophical question for organizations. One,
about what’s the difference between critical and
important, and how do we make sure that we’re not
creating a stratification of our workforce and making
some people feel more important than others?
A lot of organizations—this is going to sound a little
bit harsh—confuse fairness with, everybody has
to get everything the same no matter what. They
end up struggling to feel comfortable doing the
approach, and then figuring out what they would do
differently for that group of critical roles than they
might do for other roles in the organization.
There’s also this sense of what’s the urgency?
And where Mike was going about the culture
and the agility that you create in an organization,
to make sure that those critical roles are able
to be successful and deliver on the value. That
likely will require a shift from the way things are
normally done to drive forward a different sense
of urgency than you might have had in terms of
certain things.
Mike Barriere: I have two principles related to
this, Carla. It’s that development matters for
everybody. Every employee in an organization
should have the opportunity to reach his or her full
potential, and you want to provide that, especially
as leaders.
While that’s important, it’s also important to
think about the future of the company. Therefore,
who are those talents that you absolutely want
to get into the most critical roles? You need to
do both, but many times, we only do the broader
set. Because there’s a culture that it’s one team,
rather than there being a specific set of certain
people. That’s why we really put the emphasis on
the roles that matter, and then not only looking for
the 50 people, but what’s the succession pipeline
behind them?
There could be a couple hundred people that you’re
also developing to be ready to take those roles in
the future.
Simon London: It’s true, every organization treats
people differently. It’s just that we’re used to doing
it based on hierarchy. The difference here is, we’re
focusing on value first and role first, and that can
feel a little unusual. But it’s not like everybody in
an organization gets the same treatment today
anyway. Hierarchy takes care of that.
Mike Barriere: I’d add that it is hierarchy, but it’s
also the definition of top talent, or when you use
the nine box to try to find out who are our highest
potentials. Companies segment already, but they
segment the talent, not the roles. Then you need
more of a fact base of who has the potential? And
potential for what?
We’re saying it’s potential to be successful in
critical roles. To leverage the fact that most
companies are already segmenting talent, we’re
saying, segment the roles first, and then match
the talent and get that right, and then broaden
it. Broaden development and leadership and
opportunity and tackle it that way.
Simon London: If you think about what a CHRO
needs and what a good HR function needs in a
company that’s going to do all this well, what are
the gaps that we often see?
Mike Barriere: The first part is guts. The CHRO
has to have the moxie to push up against the
CEO and the CFO, the exec team, and call out if
the value agenda is not clear. If it’s ambiguous
or fluffy or ownership is not quite there, this is
the moment that a CHRO can really say, “Hey, if
we want to leverage our human assets, we need
much more clarity about drivers and where in
the organization is the most critical, because we
want to deploy our talent just like you would think
about deploying the financial capital.” The role
of the CHRO, particularly with the G3, is to
increase awareness and to lead. We need to take
our value agenda and our value drivers into
the organization.
From there, the CHRO does need a good sense
of the business and the industry and what are the
trends. To Carla’s point, the CHRO is an officer first,
and you happen to have an HR talent tool kit, but
the role is about understanding the business, the
business dynamics, the ways that the company can
achieve value in the future.
Carla Arellano: Mike, one of the things that you
said really jumps out at me. CHROs are probably
most comfortable, but I think if you get into some of
their teams, there tends to be less comfort. And it’s
this concept of really knowing the business and the
industry and how it makes money.
What I tend to find is, it might be that an HR leader
understands it but might feel uncomfortable
engaging a business leader on where value is going
to come from, and why they think they’re going
to achieve a certain margin, or what is the plan to
capture that digital growth?
There’s something about what you said earlier
on, having the moxie but also enabling that in
your team and giving them the comfort level
that they have just as much right, as well as
the demand on them to really understand what
needs to happen and by whom, so that they can
engage productively.
Simon London: So, I think we’re out of time for
today. Carla and Mike, thank you so much for
doing this.
Carla Arellano: Thank you, Simon, it was a pleasure