Airdate: January 14, 2026
Julie Rose:
How would you react if a stranger insulted you right to your face?
Adam Becker:
Sometimes, I would just take it. I mean, how long can they, you know, cuss at
my mother? 30 seconds? 45 seconds? At some point, they get bored of this, and
then they, "Okay, then what?" So, now I'll be like, "Okay, now
are you done?"
Julie Rose:
Hey, it's Julie. Welcome to Uncomfy, a show about sticking with moments
that challenge us even when they're uncomfortable. And I get it, nobody likes
to be Uncomfy, but I've learned from experience, and you probably have
too, that sometimes a little discomfort has benefits if we can stay open and curious
about it. And that's what we're here to explore, so let's get Uncomfy. I'm
joined today by Adam Becker. He's the CEO of HeadOn, which is an
online platform that helps people have good faith conversations about tough
topics. Mostly the platform is focused right now on the Israeli-Palestinian
conflict. It's hosted thousands of conversations between Israelis and
Palestinians and Americans, too, and AI plays a role on the platform that is
quite surprising. So, Adam Becker is with me to talk about all of this.
Welcome to Uncomfy.
Adam Becker:
Julie, thank you for having me on.
Julie Rose:
So, I know from your bio that you are trained as a tech entrepreneur, so this
is a pretty fascinating focus, and I understand that it started with your own
experimentation, having conversations like this with strangers online. Tell us
how that went. How, how did you go about that?
Adam Becker:
Yeah, so, actually, I think I've, I probably have had maybe over a thousand
conversations, just random encounters with people online, face-to-face, as in
video/audio form. I don't really believe in, in text, uh, conversations in this
form, but the audio/video ones have a very unique sort of draw for me, and I
think the conversations there ended up unfolding in a very interesting way. So,
I'll give you some background as to what happened. I grew up in Israel, and I
moved to California when I was 16, and when October 7th hit, this was a couple
years ago now, um, the next day I just, I got on a flight and just moved back
to Israel, and I lived there for, for the following seven months, and as soon
as I got there, I realized that people there were speaking a very different
language, and they were seeing the conflict through a very different lens from
the ones that you would see if you're, let's say in the US, and certainly if
you're a Palestinian, across the fence or across the border, right? So, people
were just living in just completely different bubbles, almost isolated
universes, and it felt like it was almost impossible for them to understand,
forget about agree, but at least understand what the other people are saying,
right? Like, how do they even picture themselves across those bubbles? And so,
one of the first things, I'm not entirely sure even how I found it, but I, I
just went on these, like, chat roulette apps. So, I'm not sure if the audience
is, is familiar with this, but you have Monkey app and Chat Roulette and
Omegle, these are all these different apps, some of them are no longer with us,
that just randomly pair you up and oftentimes based on like, yeah, just with
other strangers that—
Julie Rose:
Anywhere in the world, about any topic? Like, it's sort of like a speed dating
chat thing, but just for, like, conversation?
Adam Becker:
Exactly, and even less so topic. It's mostly you just say, "I want to talk
to somebody," and then after five, 10 seconds it pairs you up with
somebody, and you just start having a conversation.
Julie Rose:
And then you're, there you are, like, video chatting some random person? Wow.
Adam Becker:
Uh, so I'm looking at them, and they're looking at me, and then we can start
having conversation. Now, very often people just immediately skip 'cause they
want another person, or sometimes it's younger people, they call you ugly, and
they move on to the next person.
Julie Rose:
Okay.
Adam Becker:
Some of that is pretty painful, yeah,
Julie Rose:
to be like voted down off the island immediately. Yeah, I don't know if I could
handle that, but, but they can see where you're from at least, and kind of a
few basics about you?
Adam Becker:
Exactly, so they see, they see a little flag, which is just mapped to the IP of
the computer from which you're calling.
Julie Rose:
Hmm. Okay.
Adam Becker:
So, you could just picture that, you know, on October 8th, October 9th, October
10th, I mean, temperatures are as high as they've ever been, and people see
that the little flag is showing Israel, and immediately they go into an extremely
hostile zone. I mean, first of all, it'd be very typical to take the shoe out and
just sort of, like, show me how they're punching the camera. Uh, some people actually took
out guns or bullets, uh, and just like, showed me this and then point to my
forehead and be like, "This is coming," yeah, there were people—
Julie Rose:
And the point was what here, Adam? Just like, what, what did you want to get
out of this?
Adam Becker:
I wanted to just drive clarity. It was clear to me that it's not so much malice
that they're driven by, but ignorance, and, and I, I think that it's important
for people to be able to have conversations, and I wanted to learn how, right?
And, like, "Are people, are certain people irredeemable? Are they so deep
in ignorance or conspiracy or whatever that you can't even engage with them
productively?" I refuse to believe that that's the situation. The idea
isn't to change anybody's mind, but to open their mind. So that at least they
understand that they have an incomplete picture of reality, and perhaps
encourage them to go and do some more research. Why do I have this? I'm not sure. It
might be a bug in my system, but I just, I'm obsessed with trying to do that.
Julie Rose:
Uh, really interesting. So, you get into this kind of stranger match roulette
situation on these apps, people are immediate, a lot of them are immediately
being kind of aggressive towards you 'cause they can see that you're from,
you're in Israel and assume that you're Jewish, right? So, uh, what, what did
you start doing? Like, what if it, if it went successful, what did it take
usually?
Adam Becker:
Yeah, so I had to experiment to see what works because, I mean, there's many,
many different scenarios, and at some point I
start to kind of, like, see a person and hear how they express themselves and
figure out what works and what doesn't work for them. So, many of the
experiments included things like, well, first of all they would say, "Free
Palestine," and immediately I would say, "Inshallah, free
Palestine." And it would just, they don't know how to react to that. They
just freeze. They're like, "Wait, what side are you on?" And I would
say, "Oh, of course the Palestinians need to be free," and they're
like, "Oh, wow." And I'm like, "Yeah, and the Jews need to be
safe. We have to find a way to solve, you know, both of those problems at the
same time." And they just get stuck and be like, "Huh, that's
interesting." Something in their mental model no longer functioned as they
expected. For some people, that doesn't work. So, some people go immediately
into just cussing, you know, cussing at me, uh, and I've seen that there's
multiple things that work there, too. So, sometimes I would just take it.
Sometimes, I understand they just need a space, it's not personal. They don't
know me, right? Like, my ego's not involved in this, so they can yell and they
can cuss and whatever.
And then afterwards I'd be like, "Okay," I mean, how long can they, you know, cuss at my mother? 30 seconds? 45 seconds? At some point, they get bored of this, and then they, "Okay, then what?" So, now I'll be like, "Okay, now are you done? Are you done now? Can we now move on? Okay, so now let me, I wanna ask you a question. Can I ask you a question? I wanna ask you a question real quick. Just, okay, I understand, I got the point, now let me ask you." And so again, they're not expecting these reactions, right? They think that they will, that I will hear their profanity and immediately leave, but I'm just staying with it. I'm staying with, with the discomfort so long that now they're feeling uncomfortable by having done that.
Julie Rose:
And then do they usually bail out? They're like, "All right, this isn't going the way I expected. I'm outta here," or will, are they—
Adam Becker:
Not as many. A lot of people are very curious and a lot of people are, they
don't really have, they don't know that I can engage with them at a human
level. They're not expecting that. At some point, for some people, I'll be
like, "By the way, can I ask you: did, did you learn this type of, like,
profanity, like, is it from your parents? Is it from your family? Like, who
taught you that this is proper behavior when you meet another stranger?"
You know, and sometimes they'd be taken aback. It could also be that they're
not coming in as hot. They see me, they dislike, they wanna yell or something,
but I see that there is something in them, and then immediately I'll be like,
"Sorry, can you give me just, I, I got, my mom is, she's calling me. She
needs some help," and immediately I'll pick up the phone and sort of
pretend that my mom is on the line, and—
Julie Rose:
Pretend? You, okay.
Adam Becker:
Yeah, I'll pretend, 'cause, you know, I've been doing this hundreds of times. She
calls me dozens of times a day, but I needed hundreds, you know, so I would, uh,
just pretend to talk to her, and then I'll say something, you know, I'll talk to
her, and then I'll be like, "Yeah, no, she's, you know," just conveying some type of
humanity to them, and it works. It works remarkably. Now, I wouldn't say it
works a hundred percent of the time; oftentimes, it doesn't, but just the fact
that they see that there is some type of familialness, right? I, there's some
duty. Like, there are other values that they also aspire to have, right? My
value isn't, "Go and occupy Palestine," right? My value is now, you
know, "My mom needs help, and I wanna be there for her," and they can
relate to it. They see me all of a sudden as a human. There's plenty of those.
I mean, we can go, like I, I used to—
Julie Rose:
Yeah, tell me some more things that help to just kind of humanize 'cause it's a
really interesting little hack. Um, so what else did you find that worked?
Adam Becker:
Uh, food was another one, and, again, these things are just by accident. You know,
like, at one point, I was just eating something and then I got on, you know, on
the call, and someone was, "Oh, what are you eating?" And I was like,
"Oh, it's a soup. It's a, you know, it's like a pea soup," and
they're like, "Oh, I'm so curious. Can you show me how you make it?" And
immediately, I'll take the phone or the camera to the kitchen, and then I just
started practicing that game over and over again, right, where I just, I walk
into the kitchen, and I show them what I'm cooking and sometimes I would just
take the calls from, you know, I'm like slicing tomatoes or whatever, and then
they see, you know, behavior that they are familiar with and that they've done.
I'll show them inside my fridge. Also, I would ask people to show me what's in
their fridge, you know, like, and it's all of a sudden they're showing, they're
revealing something personal and intimate, but it isn't really vulnerable all
that much, right, and so it's almost like safe territory for us to connect
over.
Julie Rose:
Wow, so interesting, Adam. So, talk to us then about how your company now, this,
um, startup HeadOn, builds on those insights that you
gained.
Adam Becker:
I realized that it's not enough for me to do these little A/B tests of what
works and what doesn't work, and so I realized that we have to scale these
conversations and that I myself can't do it alone, and nevertheless, the
structures that I've identified of watching somebody engage in a particular way
and drawing some inference and then attempting something and seeing if it works
or doesn't work, that is perfectly frameable for a machine learning, uh, um,
exercise. That is, machine learning models are built for this. AI, in other
words, is taking in context about how you engage, so the first few seconds of
conversation, what the topic is, who are the other people in the room? How are
they, like, what is the dynamic like? What is the chemistry there? And trying
to model what might be the best way out in order to not just reduce tensions,
but to build something that both people or all the people in the group feel is
healthy.
Julie Rose:
So, on HeadOn, you're matching people up to have conversations. Strangers,
mostly? Similarly to the, like, do they come in? Obviously, everyone right now
is having the Israeli-Palestinian conversation in your ecosystem, right, but
you're matching strangers or two people coming in together and picking each
other, or how does that work?
Adam Becker:
So, they're strangers when they first show up, and at this point now many of
them, you know, know each other and like each other, and so we've been building
a community of people that have these conversations.
Julie Rose:
Oh, wow, so they're frequent flyers. So, but tell me about the initial, paint
for me the picture of, you know, you and I come in and we're on
opposite sides of this issue, let's say, and, uh, both feel really, really
strongly about our opposite views, and, uh, we get paired up in HeadOn, and
then, what? Is there, like, an AI bot that's sort of, like, moderating our
conversation, or how's that work?
Adam Becker:
Right, so I, I think the best analogy I've been thinking about is kind of like
going to the gym. So, we know that, you know, exercise is, is healthy, uh, we
know it's good to be doing strength training. You can't go and lift something
that's too heavy though, right? Like, you can, you can hurt yourself, uh, and
there's different exercises that appeal to different people depending on their
skill level and strength level, and so what the AI is doing is designing the
exercises and offering them to you based on its assessment of where you are
temperamentally, linguistically, epistemically, you know, cognitively,
socially, how you engage with other people, and it might create various types
of experiences, various types of what we call challenges, and so the challenge
might be something very combative, like, "Debate Israel Palestine with a
person that you, that disagrees with you. You got 60 minutes. Go for it,"
right? Something a little bit more confrontational, that's okay, too.
Julie Rose:
So, the AI is gonna sort of set the tone. It'll be like, "Hey, welcome to,
you know, Julie and Adam, I see you're on opposite sides. Welcome to HeadOn.
Here's your, here's your assignment for the next hour. Have a, you know, have a
feisty debate."
Adam Becker:
Exactly. Exactly, so you go in, and you could just picture it like lots of, uh,
cards that you see. Okay, so these are the 12 challenges that are open today,
but you can't actually access eight of them because you don't have the
skill level, or the AI doesn't trust that you'll be able to engage at that
skill level, or the other people in the room won't actually be a good fit for
you, so it won't, the chemistry won't align.
Julie Rose:
So, first-timers have limited choices, then. It learns over time how well I do
this kind of stuff and what my tendencies are, but initially
it's gonna be like, "Here are your two options. Give it a shot."
Adam Becker:
Yes, and I would say that, even then, however good you get, it doesn't mean
that you will necessarily have access to all of the challenges because from
what we've learned now, it's, it's more important to pair the right people
together rather than have an excellent moderator. The matching is probably the
most important thing. It's who's in the room and what is the topic and when
they're talking about it. So, we have people, there's like an older gentleman,
an American Jew, and every time he speaks to Palestinian youth, sparks just fly
and everybody leaves frustrated, everybody, and they hate it,
and so the Palestinians tell me, "You gotta kick him off the
platform," and he's telling me, "You gotta kick them off the
platform. This is horrendous," and nevertheless, I have wonderful
conversations with him, and I have wonderful conversations with them. They
shouldn't be in the same room. For them to be in the same room right now, given
their skill level, is like me going to the gym trying to lift a fridge. It's
not, I'm just gonna hurt myself in the process; I probably shouldn't do it.
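[A rough way to picture the gating Adam describes: each challenge is only offered if everyone in the room clears its difficulty and the pairing itself looks workable. The sketch below is a hypothetical illustration only, not HeadOn's actual system; the names, skill scores, and thresholds are invented for the example.]

```python
# Hypothetical sketch of challenge gating: a challenge is offered only when
# both participants clear its difficulty and the pairing looks workable.
# Names, scores, and thresholds are illustrative, not HeadOn's.
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    skill: float       # 0.0-1.0, assumed to be learned from past conversations

@dataclass
class Challenge:
    title: str
    difficulty: float   # minimum skill required to take part

def chemistry(a: Participant, b: Participant) -> float:
    """Stand-in for a learned pairing score; here, simply skill closeness."""
    return 1.0 - abs(a.skill - b.skill)

def open_challenges(a: Participant, b: Participant, challenges, min_chemistry=0.5):
    """Return the titles of challenges this pair would actually be offered."""
    offered = []
    for c in challenges:
        skilled_enough = min(a.skill, b.skill) >= c.difficulty
        good_fit = chemistry(a, b) >= min_chemistry
        if skilled_enough and good_fit:
            offered.append(c.title)
    return offered

if __name__ == "__main__":
    julie = Participant("Julie", skill=0.4)
    adam = Participant("Adam", skill=0.9)
    cards = [
        Challenge("Debate Israel-Palestine, 60 minutes", difficulty=0.8),
        Challenge("Show me what's in your fridge", difficulty=0.2),
    ]
    # Only the lighter card is offered: the skill gap keeps the heavier
    # debate off the table, like the fridge Adam says he shouldn't try to lift.
    print(open_challenges(julie, adam, cards))
```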
Julie Rose:
So, the AI is operating at a meta level. It's designing the spaces for a
productive conversation to take place in the first place, not so much being in
the room helping to turn down the flames. So, what does a successful
conversation look like then, or how do you, how do you say, "Yeah, that
one worked good. Let's do more like that."?
Adam Becker:
Yeah, so what the AI is trying to do in the first place, like, its, its objective
is to reduce the inaccuracy of the theory of mind. So, for example, when I ask
Israelis what Palestinians think, they consistently get this wrong, and when I
ask Palestinians what Israelis think, they consistently get this wrong.
Everybody is totally wrong, and we have a very faulty sense of others'
perceptions, and I believe that is a crucial problem, uh, and it's one that we
have, it's an existential one because when you misunderstand the other side,
the enemy, you know, fellow citizen, however you describe them, when you
misunderstand them, you're gonna pick the wrong tool for the job, and it's very
likely that you'll pick the kind of tool that makes the conflict even more
intractable, and so what we need, and this is just to make people more
intelligent, is for you to have an accurate window into
what is motivating people from the other side, and once you do that, there's a
lot that you could do. You could build coalitions, you can drive towards common
ground, you could do various things. That's on you to do. The platform and the
AI are geared towards trying to increase your own theory of mind accuracy of
what the others believe. This is pretty easy to do for an AI, uh, because it
can listen to you and how you describe my perspectives, and it can listen to me
and how I describe my own perspectives and just find the differences and
perhaps even gimme a score. So, if for example, you came into a conversation
and through previous conversations, the AI knows, "Okay, you've got me at
like 50%," but after this conversation, the way you've described my own
positions is now at like a 70%, something happened that was good in that
conversation. It reduced your ignorance.
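[For readers curious how a score like that 50% versus 70% might be computed, here is a rough, hypothetical illustration, not HeadOn's actual method: compare how one participant paraphrases the other's position against the other's own words. A real platform would presumably use semantic models; a simple string-similarity ratio from Python's standard library stands in here.]

```python
# Illustrative "theory of mind accuracy" score: how closely does my paraphrase
# of your position track your own words? Hypothetical sketch, not HeadOn's code.
from difflib import SequenceMatcher

def tom_accuracy(paraphrase_of_other: str, others_own_words: str) -> float:
    """Return a 0-1 score for how closely the paraphrase tracks the source."""
    return SequenceMatcher(None,
                           paraphrase_of_other.lower(),
                           others_own_words.lower()).ratio()

if __name__ == "__main__":
    own_words = "palestinians need to be free and jews need to be safe"
    before = "they only care about winning the argument"
    after = "they think palestinians need freedom and jews need safety"
    print(f"before the conversation: {tom_accuracy(before, own_words):.0%}")
    print(f"after the conversation:  {tom_accuracy(after, own_words):.0%}")
    # A rise in the score after a conversation is the kind of signal the
    # platform could read as "this conversation reduced your ignorance".
```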
Julie Rose:
Adam Becker is the CEO of HeadOn. You can learn more and maybe even join a
conversation yourself at headon.ai. They've actually launched some new
challenges around big topics in the United States, like immigration, so you
might wanna check that out. Adam Becker, thank you so much for your time and
for the work that you're doing here.
Adam Becker:
Julie, it was my pleasure, and thank you for hosting the podcast, and, uh,
thank you for inviting me.
Julie Rose:
And thanks for getting Uncomfy with us today. How often do you have an in-depth conversation with a stranger about an issue that you really care about? A lot of it, at least in my experience, tends to happen in the comments on a social media post, and we all know how unsatisfying that can feel. I'd love to hear if you've found a strategy that works. Email uncomfy@byu.edu or connect with us on social media; we are @uncomfy.podcast on Instagram. Let's keep this conversation going. Uncomfy is a BYUradio podcast. Samuel Benson produces it. The team includes Hyobin Kim and Sam Payne. Our theme music was composed by Kelsey Nay. I'm Julie Rose. Can't wait to get Uncomfy with you again next week.