Tech Talent Spotlight: Jennifer Yancie



In a world where technology moves faster than policy, and trust is becoming the defining factor of progress, leaders like Jennifer Yancie are shaping how organizations navigate the future of AI and data protection. As a Strategic AI Advisor at the University of San Francisco and a global privacy and governance expert, Jennifer’s career has been built at the intersection of innovation and accountability. We sat down with her to discuss the lessons she’s learned leading enterprise-wide security programs, her thoughts on responsible AI, and the human side of leadership in technology.


Thanks for talking to us, Jennifer! I thought initially that you could set the scene and tell us a bit about your current role, what projects and priorities you have at the moment, and what you’re working on! 

Firstly, thank you for having me! 

Currently, I serve on the Strategic AI Advisory Board at the University of San Francisco, where I collaborate with faculty and industry leaders to shape the curriculum and provide guidance on responsible AI frameworks.

My recent work has been centred around the intersection of AI, privacy, and governance at the enterprise level. This includes evaluating AI-related change requests, embedding privacy principles by design, and co-developing policies that align with evolving global regulations. I’ve also led enterprise-wide privacy integration initiatives, embedding controls into workflows and streamlining approval processes to strengthen compliance and transparency. 

Across both my advisory and enterprise experience, my focus has really been on balancing innovation with accountability and helping organizations and the next generation of leaders prepare for the opportunities and challenges that AI brings.


Before we get into talking about your technical experience, I first want to take us back and look at what first sparked your interest in security and data protection. How did you get started, and what experiences shaped your career today?

My interest in security and data protection was actually sparked during a college internship at a small company called Internet Security Systems. Up to that point, I’d always been fascinated by technology in general, but that experience really opened my eyes to the importance of protecting information and securing systems.

I started out in their managed security services department, where I learned the fundamentals of monitoring, threat detection, and data safeguarding. It was also where I learned how essential collaboration and communication are, especially when working with clients and cross-functional teams. Those early experiences taught me that security isn’t just about technology, it’s about trust, accountability, and protecting people’s information. That perspective has shaped my entire career and ultimately led me to specialize in data protection, privacy governance, and now, AI governance.


It's interesting how ‘people-led’ your approach is. It's very human and centred on trust, which is not something you always associate with tech roles! But it's really interesting to hear that perspective. Building on that, my next question is around how you've built and led security programs that have had a huge impact on organizations and on people.

When you reflect on that, is there a moment or a few moments that stand out most to you that define your journey in tech or define your distinctive human approach? 

One of the most defining moments in my career was leading my very first security program at Ernst & Young. Up until that point, I had always supported programs by contributing my technical expertise and influencing stakeholders, but this was the first time I truly stepped into a leadership role.

The program was global in scope and impacted more than 250,000 employees across 150+ countries. I managed a diverse, global team of 14 people spanning four countries, and together we implemented data protection technologies and developed processes to prevent data loss. 

It was an enormous challenge, but also one of my proudest achievements. That experience proved to me that I could lead effectively at scale while staying grounded in collaboration and trust. It also laid the foundation for the senior leadership roles I went on to hold at Equifax and Capital One, where that balance between people, process, and technology continued to guide my approach.

 

Has there been a time when you had to lead through a critical incident or through uncertainty? How did you approach bringing clarity, confidence, and leadership into that room? And what helps you make that shift from expert to leader?

One experience that really stands out for me was during my time at Equifax. Early on in my role, I discovered that employees were unknowingly uploading sensitive data to unauthorized cloud sites such as Dropbox and Google Drive. Given the nature of Equifax’s work … it was critical to act quickly and carefully. By analyzing the activity and presenting the findings in clear business terms, I helped executives understand both the scale of the issue and the urgency of addressing it. That led to the implementation of a cloud access security broker (CASB) solution, which significantly reduced the organization’s risk exposure.


Leadership in high-stakes moments is really about influencing through clarity. Many leaders were initially concerned that implementing a CASB might disrupt legitimate data transfers or client exchanges. I worked closely with them to build a strategy that balanced security with operational needs, allowing approved data flows while blocking unauthorized ones. I also emphasized a phased implementation approach, showing that effective change takes time and precision. By combining transparency with data-driven insights, I was able to build trust and lead the organization through a challenging but necessary transformation.


That's really fascinating. And I suppose that strategy piece is the bit that allows you to say to stakeholders ‘I understand your worries, and I'm a problem solver, not a problem producer.’

Absolutely!


I have another follow-up from that, because when you were talking about unauthorized cloud sites, I couldn't help but think about LLMs and enterprise AI usage, especially companies who might say ‘oh, we want you to use AI!’ but don't really give a clear pathway to what tools to use, so their employees use anything they can.

So I just wondered, do you see that there's a big similarity between cloud security risks and the possible risks that LLMs pose?

Oh, absolutely. I see a very clear parallel between what we experienced with the rise of cloud adoption and what’s happening now with generative AI. The lesson we learned then still applies. Without proper governance, innovation can outpace control. That’s why governance is absolutely key. Organizations need to take an enterprise-level approach to adopting AI and establish clear processes, approved tools, and accountability rather than allowing employees to experiment freely and unintentionally create “shadow AI” environments. When AI is introduced through structured governance, it becomes an enabler of innovation rather than a source of risk.

 

My next question is about insider threats, because it's a big focus of your work! You were talking about how you identified people using things in the wrong way. When you think about people, rather than just systems, as a risk, what reflections or lessons have stayed with you?

What I’ve learned about insider threats is that people are often both the risk and the solution. Not every incident stems from malicious intent. Many occur because of mistakes, unclear guidance, or gaps in process. That’s why I always pair strong technical controls with education and clear communication. You have to build a culture where employees feel supported, informed, and accountable. When people understand the “why” behind security measures, they stop seeing them as obstacles and start acting as partners in protecting the organization.

And I really am big on the “why” because once people connect to the purpose, everything else falls into place.


Following on from that, how do you manage people who might feel like you're blaming them for security risks that they're not clued up on in the first place?

Absolutely. That’s such an important point. When people feel blamed, they disengage, and that’s the last thing you want in a security culture.

My approach is to focus on education over accusation. I try to make it clear that governance isn’t about catching mistakes. It's about creating clarity, consistency, and confidence so people can do their jobs securely. When employees understand why something matters and how it protects them as well as the organization, it shifts the dynamic from blame to partnership.

And yes, that directly connects to governance because good governance is as much about empowering people as it is about setting policy.


Definitely! With education, do you aim to take that blame from people whilst giving them accountability, empowering them to solve those problems?

It is a fine line to walk, but I’ve found that collaboration and communication make all the difference. In past roles at both EY and Equifax, I partnered with our security awareness teams to turn mandatory security awareness training into something more meaningful. Within those annual security modules, we embedded real examples of data handling, both excellent and poor practices, so employees could clearly see what “good” looks like.

We made sure that every end user understood why these processes matter and how they protect both the organization and our customers. Once people grasp that context, the anxiety around being blamed fades, and accountability becomes something they share rather than fear.

 

That's interesting, thanks for that insight! Moving on now to AI governance and the future of security, my first question is: What excites you most about the role AI will play in security and what scares you the most? 

What excites me most about AI is its potential to move us from reactive to proactive security, helping us identify and mitigate risks faster and at scale. AI gives us an opportunity to anticipate threats before they happen, which completely transforms how we think about protection.

At the same time, AI is reshaping the governance conversation. We have to ensure these systems are transparent, ethical, and accountable just as we expect people to be. Over the next decade, I think we’ll see a real duel between AI-powered defenses and AI-powered threats.

The organizations that will thrive are those that strike the right balance between innovation and trust.


You talked a bit about the ethics of AI there, and I do want to dig a bit more into that. Maybe I'll just let you talk about what you think the biggest challenges to ethical AI are for a minute!

When we talk about ethical AI, one of the biggest challenges is ensuring that systems are held to the same standards of fairness and accountability that we expect from people.

Regulations like the California Consumer Privacy Act emphasize the importance of handling data ethically and that extends directly into how AI systems process and use information. We have to make sure these systems aren’t introducing bias, particularly around sensitive factors like race. For example, if an AI model is used to determine credit eligibility, we need to be confident that decisions aren’t being influenced by race or other protected characteristics. That means continuously testing, validating, and holding AI systems accountable just as we would any human decision-maker.

 

That’s fascinating, because I remember quite early in the days of ChatGPT there was that 50 states thing going around. People had generated an image of each state, and the results were racially insensitive. A lot of people criticized the data the LLM was trained on, which was fair. But at a certain point, isn't the problem the human data AI is trained on, because it carries the same biases? Have you got any ideas on how we get round that?

That’s such an important question, and I think the answer starts with process and collaboration. We have to bring the right people together early, like legal, compliance, technology, and governance leaders, to look holistically at how AI systems are being developed and deployed.

Right now, only a few regulations around AI exist, and many more are still emerging. That’s why it’s critical to establish strong checks and balances before problems arise. The fear of missing out is driving a lot of organizations to move fast with AI adoption, but speed without structure can introduce unnecessary risk. 

That’s exactly why governance is so essential. Whenever you’re implementing AI systems, you need a governance framework that ensures transparency, accountability, and compliance from the very beginning.

 

I suppose my question alongside governance is: do you feel as though some businesses rushed to implement AI and don't question how it fits or how it can really help? 

I do. At the same time, I’ve seen a few organizations take a very strategic approach in ensuring that any AI systems being introduced go through proper change management and governance processes. Trust is a key part of that. The companies that are doing it well make sure the right people are at the table to evaluate whether these AI systems are being implemented ethically and responsibly.

Not every organization is there yet, but I give a lot of credit to those that are taking the time to do it right. They’re setting the standard for how AI should be introduced.


It is great to see some companies really taking stock and making sure they're approaching it the right way. I'm sure if they're working with you, they are doing that.

I want to just go back to asking about your leadership development for a minute. What does leading with integrity mean to you in practice? And how has your leadership style evolved over the years to grow towards that? 

For me, leading with integrity means consistently aligning your words with your actions and building trust through honesty. It’s about creating an environment where people know they can rely on you to do what you say you’ll do.

Empowering teams, to me, means giving them both autonomy and support so they can truly thrive. Over the years, my leadership style has evolved from being more directive to becoming more situational, focusing on coaching and empowerment during steady times, and stepping in more directly during moments of crisis when teams need stronger guidance.

Ultimately, effective leadership is about flexing to the moment while staying grounded in trust, accountability, and integrity.


Thank you for sharing! I want to talk a bit more about you now, as a woman of color in cybersecurity, what barriers have you faced and how have you found ways to navigate or dismantle those barriers? And are there any stories that come to mind that highlight those experiences? 

Oh yes, the barriers have been real. One of the biggest challenges I’ve faced in tech, as both a woman and a woman of color, has been ensuring that my voice is not only heard, but respected.

Early in my career, I often found myself being silenced. Not in overt or aggressive ways, but through subtle dismissals or having my contributions overlooked. I had to learn how to assert myself in ways that made my perspective impossible to ignore, while continuing to demonstrate the value I brought to the table.

Another challenge has been pushing back against the perception that I might be in the room simply as a diversity hire. I’ve worked hard to ensure that my expertise, insights, and results speak for themselves and that I’m recognized for the impact I make, not just the representation I bring.

 

Mentorship and sponsorship seem to play an important role in your journey. Have there been people who have been there for you and opened doors for you? What do you do now to do the same for others behind you?

Oh, absolutely. I’ve been fortunate to have mentors and sponsors who opened doors for me at pivotal moments in my career. Two great examples would be when my first manager in tech advocated for me to be hired full time, and leaders at Ernst & Young who trusted me with global responsibilities early on. Those experiences left a lasting impression, and I see it as my responsibility to do the same for others. Today, I actively mentor women in underrepresented professions, especially in technology. I focus not only on offering guidance, but also on advocacy by making sure that when opportunities arise, their names are in the room. Whenever I know someone trying to break into the industry and someone else who’s hiring, I make those connections.

That’s how we build a stronger, more inclusive culture.


That's great to hear. Digging deeper into that, as part of your governance and AI framework work, do you have that discussion about the fears around entry-level roles being reduced because of the perceived abilities of AI? Is that something you spend time thinking about in your work?

I absolutely do. It’s something I think about often, because we’re already seeing how AI is impacting entry-level opportunities for young professionals.

AI should never be viewed as a replacement for people. It should be a tool that enhances human capability. The goal should be to use AI to supplement knowledge and skills, not to substitute them.


Let's talk a little bit more about governance. What is one tip you would give to any business thinking about implementing AI, or one thing they must think about before they go ahead?

The biggest thing, for me, is making sure that as you implement AI, you keep security, governance, and ethics front of mind. It’s easy to get caught up in the excitement of the technology, but responsibility has to scale with innovation. Organizations should hold their AI systems to the same, if not higher, standards of accountability and integrity that they expect from people. When you do that, you create trust, transparency, and long-term value, not just technological advancement.


That's really great, thank you. What advice would you give to young people who are just beginning to think about a career in tech?

Oh yes, this is a big one for me!

My advice to young people starting out in tech is to embrace lifelong learning. Technology changes fast, and the people who succeed are the ones who stay curious, adaptable, and proactive about growing their knowledge.

When AI began gaining momentum, for example, I made it a point to get involved in enterprise AI initiatives and even earned my AIGP certification in AI governance to deepen and validate my expertise. It’s a reminder that continuous learning keeps you relevant and ready for what’s next.

Start small, take every opportunity to learn, and remember that relationship-building and problem-solving skills are just as important as mastering tools. And above all, focus on communication. The ability to explain your ideas clearly and collaborate effectively will take you just as far as your technical skills, if not further.


We touched on that with you earlier! That shift into leadership happened for you when you realized how to communicate these risks to people outside of tech, right?

Absolutely. Speaking the language of the business is so important. Technical jargon can easily go over the heads of business leaders, and while that’s improved over the years, it’s still a barrier in many organizations.

That’s why clear, relatable communication is essential. It’s about translating complex risks into terms business leaders can understand and act on. And at the heart of that is empathy.


Do you feel anything about being a woman has helped you be better at your job? 

It has. Those experiences of being subtly quieted or having my opinions overlooked have pushed me to work even harder and become more intentional about how I show up. It can be frustrating. I think it would be for anyone, but it’s also been incredibly motivating.

What I took from those moments was the importance of continuous growth: learning to upskill, to adapt, and to find different ways to get my point across without being forceful. Sometimes you have to push a little harder, but the key is doing it with confidence and composure, not aggression.

Those early experiences taught me resilience, empathy, and the value of communication. In many ways, they’ve made me not only a stronger leader, but a more self-aware one.


That's so great. Thank you for being so open about your experience today! Are there any organizations you'd like to plug? We can include them as resources or places to go to find out more. 

The IAPP! That's where I've had a lot of my training, so I do want to highlight them.


As AI and data governance continue to evolve, Jennifer’s advice serves as both a guide and a challenge: to pair innovation with accountability, and to build systems, and teams, that are as transparent and ethical as the people behind them. 

If you’re a woman in tech who is interested in featuring in our tech talent spotlight, get in touch on LinkedIn or through our website.