On the fifth episode of Season VIII: Polarity - IU Edition, Carolyn Hadlock and IU students Evan, Nicolle, and Valeria welcome Jean Camp, Professor and Director of the Center for Security and Privacy at the Luddy School of Informatics, Computing, and Engineering.

Nicolle Gedeon, Evan Masten, Valeria Juarez
"We control access to our physical and social spaces in a way we cannot control access to our virtual spaces. There is a huge conflict between individual control of virtual spaces and the ability to make money advertising to us." - Dr. Jean Camp
Carolyn Hadlock:
Today we are talking with Jean Camp, who is a professor at the School of Informatics at Indiana University and the director of the Center for Security and Privacy at IU. Welcome to the show!
Jean Camp:
Thanks for having me.
Evan Masten:
I know five or ten years ago I didn't hear much about the Informatics School. I always heard about Kelley. During your time at IU, how have you seen them put resources into that school, and how have you seen it change as technology has evolved?
Jean Camp:
When I arrived at the Luddy School of Informatics, there were no computer security classes, and it was just me and three hires fresh out of grad school. Now, if you look at the computer science rankings, strictly based on research productivity, we have been in the top 10 for literally the past 15 years. The university has invested consistently in computer security. If you look at the other top 10 programs, they are mostly technical schools. IU has brought a dimension of international awareness that no other school has. Our cybersecurity and global policy undergraduate program is unique in the world: an interdisciplinary program that combines regional or area studies and computer science.
Carolyn Hadlock:
One of the things we saw you wrote about was how you weren’t seen as a narrow subject-matter expert, and that you really saw the importance of bringing the human equation into this space. That seems to have contributed to the school’s reputation and helped elevate it. The idea of interdisciplinary research and teaching is essentially beautiful thinking.
Jean Camp:
Thank you so much. When I originally entered computer security, it was part of national security, and there was a huge tension between security and privacy. Now there is an increasing recognition—if not yet widespread technology adoption—that information leakage is information leakage. It's hard to provide security if everyone knows everything about you. A lot of the attacks we see today are based on the availability of public information—like using someone’s family videos to generate AI-based, highly targeted attacks, or gathering all the information needed to recover an account from someone’s Facebook friends and family. That has been a huge sea change during my time in computer security. I don’t think many other universities would create a cybersecurity and global policy program because it requires recognizing that internet safety is a global issue—one that’s about breaking down barriers and silos. Not just saying, “Oh, you’re in human-centered computing and I’m in security,” but breaking those divisions across campus. I love that about IU.
Nicolle Gedeon:
Going off of what you said about cybersecurity, how does this relate to consumer behavior, and why do you believe studying consumer behavior is important?
Jean Camp:
What we demand of consumers is just not reasonable. Honestly, if someone is a brain surgeon, I don’t want them to know anything about computer security—I want them to know everything about brain surgery and not worry about anything else. So I feel like privacy and consumer safety need to show a lot more respect for the people we’re trying to protect, instead of treating them like “end users” we need to educate and control. We even call them “users.” There’s only one other industry that calls its customers “users”—and that’s not a good industry. Privacy and human-centered design are about respecting the person.
Carolyn Hadlock:
And how are you accomplishing that? We’ve read some of your research papers where you're putting the onus back on system designers. Do you mind speaking about that a bit?
Jean Camp:
Well, the person who is capable of mitigating a risk should be responsible for mitigating it. One of the things we try to do is provide users with the information that the computer and system already have. For example, in order to fall for a phishing attack, you have to believe the website you're visiting is familiar. So before you enter an account or set one up, we built a system that was highly individualized. It would alert you: “You’ve never been to this website before. This is not a trusted website.” And that’s all you really need to know. If your response is, “Get that out of my way, I’m signing up,” then fine—go ahead and create your new account.
Evan Masten:
I see this a lot in retail with trending products. A popular product sells out, and then other companies create a website with the same name or the exact same product and try to sell it.
Jean Camp:
And the computer knows that’s not the same website. It knows it’s hosted in a different country, that it’s a brand-new domain name, and that it’s not one your friends have visited.
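For readers curious what such an individualized check could look like in practice, here is a minimal, hypothetical sketch in Python. It only illustrates the idea described above, comparing a link against your own browsing history and warning you before you enter credentials; the history file, the domain handling, and the warn_if_unfamiliar helper are invented for this example and are not the actual system Dr. Camp's group built.

```python
from pathlib import Path
from urllib.parse import urlparse

# Hypothetical local store of domains this user has visited, one per line.
HISTORY_FILE = Path("visited_domains.txt")

def load_known_domains() -> set[str]:
    """Read the user's own browsing history into a set of bare domain names."""
    if not HISTORY_FILE.exists():
        return set()
    return {line.strip().lower() for line in HISTORY_FILE.read_text().splitlines() if line.strip()}

def warn_if_unfamiliar(url: str, known_domains: set[str]) -> bool:
    """Warn (and return True) when the user has never visited this domain before."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain not in known_domains:
        print(f"You've never been to {domain} before. This is not a trusted website.")
        return True
    return False

if __name__ == "__main__":
    known = load_known_domains()
    # The warning is advisory: the user can still say "get that out of my way" and sign up.
    warn_if_unfamiliar("https://example-shop-deals.com/login", known)
```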
Carolyn Hadlock:
So you're saying the computer knows, but it’s not flagging it for me. Is that a hardware issue? A software issue? Who is responsible for flagging that, and why is it not happening?
Jean Camp:
It’s an economics issue. Right now, the model of security is that you should accept everything until we know it’s bad. But that’s not how we live our lives. And I think that’s where the big privacy vs. security conflict lies. It’s not that you can’t be secure—it’s that you can’t have the same services or value to the service provider if you are secure. And that’s a deep, systemic conflict.
Carolyn Hadlock:
In the cybersecurity space, we were looking at some of the data, and it seems like the resistance you’re describing really comes down to the bottom line. At the end of the day, cybercrime is projected to cost the global economy $10.5 trillion annually by 2025. So are you doing work at the national or international level to help build the case for a new economic model?
Jean Camp:
Well, I hope I’m doing that work. One thing I do—and encourage my students to do in class—is respond to proposed policies. There’s something called a “Notice of Proposed Rulemaking.” It’s not the most exciting thing in the world, but it’s how the U.S. government traditionally makes decisions. We’ll write an abstract of our research and upload our papers. There’s an idea that there should be a minimum necessary level of security—a minimum standard—and that any IoT product or home device should have a seal indicating that. One argument made against this is that “no one will pay for security.” So we ran a series of experiments and found that people who care about security will pay for it. Of course, that’s not the whole market.
Nicolle Gedeon:
Going off of that—do you believe we can obtain both privacy and security, or do we have to give up one to get the other?
Jean Camp:
The ability to protect your own information is central to security. We carry these location devices in our pockets all the time. We upload so many photos that it’s possible to construct 3D models of cities, towns, and landscapes just by putting all of these things together. So, we can have far more privacy than we do now without any decrease in security. Right now, we’re giving away so much information that it actually makes us less secure.
Carolyn Hadlock:
Do you see a generational divide in that? Are certain generations more okay with the trade-offs?
Jean Camp:
That’s a complicated question. When I teach older adults, I’ll often say: create a superhero persona. Don’t give your real name for everything. And if you have kids, lie about your birthday—because “mom’s birthday” is a common security question. Frankly, you should remember your mom’s birthday without Facebook having to remind you. So there are a lot of different navigation techniques at different levels. But if you look at phishing resilience, there’s not really a significant difference across age groups—until you’re looking at the very, very old.
Valeria Juarez:
You talk about the role of community involvement. Why do you think that’s important, and how can we work together to combat these kinds of threats?
Jean Camp:
I feel that in computer security, we’ve focused a lot on defense and “capture the flag,” but not so much on empowering communities to protect themselves. And this goes back to the model of privacy and security where all your information is uploaded, processed to optimize advertising, and some subset is sent back to you. That model assumes we’ll identify the bad things, tell you what’s bad, and you simply trust everything else. But that’s not how communities work. Different communities trust different sources of information and have their own members. Just the ability to identify that someone selling you a car on Craigslist actually lives in your county is important. When platforms sell that ad, they know the person doesn’t live there—but you’re not told that. So part of what I try to do is look for network indicators of fraud and maliciousness and then provide that to the person making the trust decision.
Evan Masten:
I feel like I’m always hearing about the negative effects of AI. I was wondering—have you been researching that? And how do you think AI can combat these problems and provide solutions?
Jean Camp:
Thank you so much for asking. Well, you should have an AI. There’s a big, global AI detecting bad things, but you can also have your own personal AI that detects what you trust. We built home IoT systems and used AI to see whether the servers a device was connecting to were normal, whether the route it was taking looked usual, or whether it looked like someone was spying on you. Then it could pause the connection so you could choose to make a change. There are legitimate services abroad; I’m not saying “never connect to anything outside of the country,” but at least know when you’re doing it. We’ve seen subverted devices literally inside people’s homes. Maybe you want to connect to your parents’ house; maybe that’s something you really want to do.
Carolyn Hadlock:
Like a Nest camera. If I’m trying to check in on my mom and make sure she’s—
Jean Camp:
—safe, then yes, you might want to connect to that. But if you’re buying a T-shirt, you shouldn’t be connecting to someone’s Nest camera.
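As a rough illustration of the “is this connection normal for this device?” idea described above, here is a toy Python sketch. The device baseline, host names, and flag_unusual_connection function are assumptions made up for this example; they are not the research system Dr. Camp describes.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceBaseline:
    """What 'normal' looks like for one home device, learned over time."""
    name: str
    usual_hosts: set[str] = field(default_factory=set)      # servers it ordinarily contacts
    usual_countries: set[str] = field(default_factory=set)  # where those servers are hosted

def flag_unusual_connection(baseline: DeviceBaseline, host: str, country: str) -> bool:
    """Return True and explain why when a connection looks unusual for this device."""
    reasons = []
    if host not in baseline.usual_hosts:
        reasons.append(f"{baseline.name} has never contacted {host} before")
    if country not in baseline.usual_countries:
        reasons.append(f"traffic is going to {country}, which is new for this device")
    if reasons:
        print("Paused connection: " + "; ".join(reasons) + ". You can allow it or block it.")
        return True
    return False

if __name__ == "__main__":
    camera = DeviceBaseline("living-room camera", {"cloud.vendor.example"}, {"US"})
    flag_unusual_connection(camera, "cloud.vendor.example", "US")       # normal: nothing happens
    flag_unusual_connection(camera, "unknown-relay.example.net", "XX")  # unusual: paused for the user
```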
Evan Masten:
I always think about those Nest cameras—if we have access to them, then there must be some way someone else can too. If you’re putting these cameras in your house, where does the privacy lie?
Jean Camp:
Well, you might want to check your terms of service. They’re using a lot of images and imagery for AI training. There was a recent lawsuit about law enforcement access to Nest video camera data, questioning whether that requires a warrant. Because you’re voluntarily sharing that information with a third party, they currently want to cooperate with law enforcement while spending as little money as possible doing it. So, they basically provide an interface for law enforcement to access data.
Evan Masten:
So basically, when you accept the terms, they're just getting rid of their liability?
Jean Camp:
Exactly. A lot of what we think of as privacy violations is really just risk-shifting.
Evan Masten:
That’s definitely how it feels. No one wants to take the blame or fix the problem, but there’s clearly a problem we can all agree on.
Carolyn Hadlock:
I liked what Evan was asking. In general, when people come to you and they’re freaked out about AI, what do you say to them?
Jean Camp:
Oh, I think we can create less harmful AI. But the idea of AI causing some kind of existential crisis and destroying the world? That’s ridiculous. I’m sorry, but if the standard for a new technology is “Hey, as long as it doesn’t destroy civilization, we’re doing great”—that’s not a good standard. One reason I work with older adults and sometimes with people with disabilities is that if you solve the problem for vulnerable populations, you often solve it for everyone. Let’s design for the more vulnerable members of society—then we’ll end up with something that protects most people. Instead of designing systems that require you to have a PhD in computer science just to understand your settings or terms of service.
Carolyn Hadlock:
One thing you said a minute ago really stuck with me—you mentioned AI and having your own personal AI. How does someone go about doing that?
Jean Camp:
Well, that is definitely in the research phase. That is my research—trying to, instead of retraining the global AI model with everything you feed it, have your own local models that alert you when something is different for you. That’s a locally solvable problem, not a globally solvable one.
Nicolle Gedeon:
What role should the government play in making technology safer for others?
Jean Camp:
One of the big changes I’m very excited about that happened in the EU is that software producers now have normal liability.
In the U.S., and for a long time in the EU, software was so new and unfamiliar that if you sold someone a software product and it failed, causing damage, the software manufacturer was uniquely immune from liability. This was because the internet was still emerging—we didn’t know what would work, what wouldn’t, or whether things would scale or collapse. So, developers and the software community were given special immunity under liability law.
That immunity is now gone in Europe, acknowledging that software has matured. If you choose to create vulnerabilities or take risks, and you choose not to test your software, then you should be held liable—just like if I buy a clock and it burns my house down.
Carolyn Hadlock:
It’s kind of like if I buy a car—there’s Carfax. I can look at the Carfax and see the history, and then I can choose whether or not to buy the car. Or if I do buy it, at least I know what I’m getting. That’s what I’m hearing.
Jean Camp:
Yes—you're always going to take some kind of risk when buying a car or using the internet. But you should choose to take that risk. You should be able to mitigate it, rather than being affected by decisions made by developers or technologists who are driven solely by the bottom line.
Carolyn Hadlock:
And is that part of “security by design”?
Jean Camp:
Absolutely. Secure by design and secure by demand. That means if there’s a risk that can be mitigated, you mitigate it. If the risk is inherent to enabling people to connect to the internet, then they should be aware of it and make their own informed choice.
Carolyn Hadlock:
And that’s part of the White House’s effort, right? Which ties into your point—that this is, in a way, a role for the government?
Jean Camp:
Yes, exactly. They have several major programs. One is the Cyber Trust Mark, which will, in theory, be coming soon to an IoT product near you. Another is Secure by Design, which is based on the idea that we know there is a minimal set of best practices that should be followed. We know what bad practices are, and companies should be required to avoid them.
Carolyn Hadlock:
It’s kind of like a Good Housekeeping Seal, in a way.
Jean Camp:
Yes.
Carolyn Hadlock:
Yeah.
Jean Camp:
Consumer Reports and the Better Business Bureau have both been active in this space—especially Consumer Reports.
Evan Masten:
I think a lot of these issues stem from a lack of knowledge. People don’t really understand what they’re doing, or they’re just accepting things. They just want to go to a website, and they’re not reading what they’re seeing. I think what isn’t highlighted enough is the need for media literacy—being taught how to understand and navigate media. A lot of these issues would be resolved if people simply understood what they were doing.
Jean Camp:
Yes. So, we need to make things simpler and integrate that with education.
Carolyn Hadlock:
And where does that education begin?
Jean Camp:
Well, I’ll admit it’s been a while, but when I was a Brownie leader, I created my own local internet safety badge for my Girl Scouts.
Just like when you’re young, you learn not to play with matches or how to stop, drop, and roll—we should have the same kinds of heuristics for online behavior that work for kids. We already do that for physical safety; you can’t just walk into a daycare randomly. We can create safe zones and protections—but right now, we don’t empower people to do that.
Carolyn Hadlock:
Is it that we don’t enable people to do that, or is it expensive and people just don’t want to spend the money on it?
Jean Camp:
It’s a combination. Sometimes people choose not to be secure or private because they think it’s a rational decision, like you said. And sometimes it is—Google search is great and it’s free, but only because we give up our search data.
Evan Masten:
Or people think, “If everyone else does it, why not me?” Why would I be the one affected? If everyone is posting their lives on social media and sharing their location, you don’t think it’ll happen to you—until it does.
Jean Camp:
Sometimes it’s a rational choice to take an online security risk. But sometimes, you just don’t know the risk. And I think that second scenario, where other people are taking the risk and seem fine, is dangerous. Because when those people do get hacked, they’re not necessarily going to post about it. We treat it as something embarrassing.
But let’s be honest: the attackers are often multinational corporations. Data science is science—we can test hypotheses about how people behave. If someone tries to phish or defraud you, you shouldn’t feel embarrassed.
If you read every privacy policy from every website you visited, that would take up your entire life. And those policies can change at any time. You’d have to keep re-reading and staying alert constantly.
So I don’t blame anyone for not reading privacy policies. I believe we need more protective defaults—if a company wants your data, they should have to provide something in return. Because by giving them your data, you're decreasing your own privacy and possibly creating risk for yourself.
In my approach to computer security, I try to encourage people to think about zones of trust, rather than focusing on everything untrusted. If someone asks for your information—like your credit card number—hang up and call back using the number on the back of your card. Tell them to put a hold on the charge and that you’ll get back to them.
And also—let me just say this: the FBI will never call you on the phone. They just don’t initiate contact that way.
Valeria Juarez:
What is your message to anyone who uses the internet?
Jean Camp:
If you’re nervous, take a breath, disconnect, and then reinitiate the connection with a party you know is trusted. And if something happens—if you fall victim to a crime—you’re not the one who should be embarrassed. The person committing the crime should be.
Carolyn Hadlock:
And the more we speak up, right? The more we say, “I was hacked” or “this happened to me.” That’s part of what you’re saying—engaging the community by making it okay to say, “this happened to me.”
Jean Camp:
Exactly. Don’t be afraid to share your story. Let people know there’s a risk—and that intelligent, capable people can fall for fraud.
Nicolle Gedeon:
Thank you so much.
Valeria Juarez:
Yes, thank you for inviting me. Thank you so much.
Carolyn Hadlock:
Thank you so much for listening to this episode.
This episode was created and produced at the IU Media School as part of the Beautiful Thinkers Podcast: IU Edition.
To follow along this season, check out the Beautiful Thinkers Project on Instagram and LinkedIn.
Special thanks to Bella Grimaldi for our music and to the students who researched and recorded this episode: Nicolle Gedeon, Valeria Juarez, and Evan Masten.