Tyler and Amanda spent a few days at the Marketing AI Convention (MAICON) this year and came back energized about AI. In this video, they discuss some of their major takeaways and what they learned at the conference.
Amanda: Hey, Tyler. Thanks for grabbing some time.
Tyler: Of course. Let's talk about AI and MAICON.
Amanda: Yeah, we went last week. It was super great. I'd love to just talk through our top three takeaways each. Maybe we can go back and forth. What would you say was your number one, or your number three?
Tyler: Takeaway number one or number three?
I like it. Okay, so the overall feel was cool at MAICON this year. I've gone a couple of years, including some online events, but the main takeaway was the community feel. We've all been to conferences and events where it feels like everyone's just trying to claw their own thing out of the deal.
This whole event felt so much more like a community: hey, let's help each other through this, get better at it together, and share what's going on. Obviously you still have to conduct business, and there's maybe some competition in the room.
But on the whole, no one really knows what's going on with AI because it's all new, and everyone's kind of in the same boat. So, the community feel of that. And then a specific takeaway within that for me: somebody was presenting and talked about working together as a team.
One of the specific things was creating a prompt library. So if you're familiar with ChatGPT and some of the other AI tools, you have to know how to communicate with it to get the outcome or result you're trying to get back from it. So maybe you're writing a question in a certain way, or you're asking it certain kind of guiding things as you're going through it.
But sharing that as a team, or even keeping it by department or job function: "This is what I'm typing in or adding, and this is what I got from it." Just so people can share what they're learning. I thought that was a really cool, super tangible takeaway.
It's also part of that community, figuring it out together versus feeling like you have to have it all together just to show up. So yeah, I really liked that too. What about you? What's your one, or your three?
Amanda: I guess my number three takeaway, because it's not as fun as the other ones, is that there are still a lot of ethical and legislative issues that need to be sorted out with AI.
And we all know that, and different countries handle things differently. But instead of being afraid to dive in, the thing I took away is that as humans, we intrinsically know the right thing to do. You know when you're being shady and when you shouldn't do something; most people know that.
So really be thoughtful with it and use it for good. But still, it's not a person, so you need to check things. There are biases sometimes built into AI because the internet is biased. So when you're using anything created by something that's not a human, you need to check it.
So: are you constantly referring to people as one gender versus another? What about race and ethnicity? How are those interpreted and represented? That was super interesting to me. And then, of course, there's a lot of discussion around data privacy and how data is being used.
So large orgs might be a little nervous to dip their toes in the water. But there are still ways to incorporate it that don't harm proprietary information or customer and client data, ways to get your teams to start using it. It needs to be handled wisely and well, and it's going to take a long time to get it all sorted out.
But there are some really smart ways you can still use AI. It just needs to be checked by a human.
Tyler: Yeah, it was interesting. I liked when they were talking about, I think it was a specific generative AI example, maybe image creation or something like that.
One of the examples was, "Hey, create an image of a CEO of a tech company," and it was all white guys in their fifties. It was all very much the same. What was cool is that if you were in that room, it really felt like, "Oh, wait a second."
It was almost like a rallying-cry moment of "No, no, hold on. That's wrong." The information it kicks out is wrong; there's bias. You all kind of know right away that this is a problem. And being in those environments is cool, because for me it gives a little bit of a shot of hope.
For all the bad that's reported going on in the world, there are good things happening too. Being in a room like that, knowing there are that many people fighting for that value system, those morals, whatever you want to call it, was really cool.
So when you said that, I thought: that's right, that feeling in the room was really cool when they talked about it and people had that reaction. So yeah, I echo that for sure. It leads into one of my top three, which is that there was a lot of talk about what I think they call AI councils in organizations.
So think of a board of directors, a group of people that would help steer you in the right direction. Depending on the size of your company, you might not have enough people for an AI council, or you're all stretched super thin anyway.
But my takeaway from that was really just outlining your guiding principles: the guardrails you stay within when you're using these tools. Because it's one thing if you're saying, "I'm just gonna have it help me write copy on a blog post, or edit this, or check for grammar." It's entirely different if you're having it create stock footage or B-roll for something.
I mean, at this point we're starting to see people using generative AI and just throwing things out there. They might think it's funny, but it's having a negative impact on the organizations or individuals that are unintentionally targeted. You really do have to pay a lot more attention to that kind of stuff.
So I think having those guiding principles or values, as individuals and then as organizations and teams, to use as filters: what tools are you using? How are you using them? What outputs are okay and what aren't? That's super valuable in an organization.
So that was one of my takeaways: that team dynamic of values and principles, and then, if you're a larger company, really starting to form a real AI council to help guide and oversee some of those things.
Amanda: I liked that too. And it leads into my next takeaway, which is that as larger organizations are thinking through how to implement AI, we've already seen it go terribly, where they just decide to replace an entire department with AI.
That's not the smart thing to do. Don't do that.
When we're thinking through what kinds of tasks AI takes on, think of it more as a task supporter or task doer, not necessarily a human replacement. The kinds of things AI is really great at are things you've already talked about, like a prompt library: really specific instructions with a clearly intended outcome.
It's just that there might be a lot of steps to get there, and we know a computer can do that. So not necessarily helping us build very strategic plans or things that require deep work and deep thought, but taking over things that are just algorithmic or monotonous tasks.
I liked that because a lot of people right now feel like their jobs are at risk. "Oh my gosh, AI's gonna come in. I'm a knowledge worker. AI does knowledge work. It's gonna take away my job." It might take away some tasks of your job. But then it might enable you to actually do your job better or more fully.
Cassie Koskoff, I think is her name, gave a really great talk where she compared thinking versus thunking. Thunking is "I gotta do this, I gotta do this": very monotonous, boring. Thinking is the deeper work of, how am I going to apply this to solve this problem?
So we should be focusing on thinking jobs and letting AI do the thunking jobs: the monotonous things we just have to do, the grunt work that makes the thinking happen. I just thought that was so powerful.
Tyler: Yeah, that's solid. That's super true. There were some big lessons and big themes being shared, and then there were also these little takeaways. I mean, we did this one, and you brought it up a couple of times and we laughed about it: using GPT and asking it to explain things.
For me, we were doing golf analogies. "Can you explain blockchain using golf analogies?" It was funny what it came up with, and actually relatively accurate, like, "Oh wow, this actually makes sense."
But I see so many people using AI almost like a search engine, just asking it questions. Instead, maybe flip it a little: have it explain things, write outlines, create strategy. And I'm saying all this knowing I'm someone who's been paid for all of those things over the last number of years.
But that's one of those jobs where you go: can it work alongside you and help improve what you're doing? Get you to think about things differently, or help you explain them more easily? As you learn to use these tools in tandem with how you normally function, it does allow you to go, "Oh, actually there's room for this. There's a place for this."
There's a reality where this can exist and make what we're doing better: more efficient, more impact, better results, whatever. It's not just "Oh my gosh, it's coming for my job, I don't get to work anymore, this is ruining my life." But it does require a mindset where you're open to change, adapting what you're doing, and improving along the way.
And then there are like a million tools. I think Mike Kaput did 45 tools in 45 minutes or something, rapid fire. It was awesome, actually. But one of my takeaways from that session was how to evaluate software tools in this emerging AI space.
Somebody brought up an example in one of the sessions about using different filters to evaluate tools and knowing whether they're really AI. One of those filters: with the efficiencies AI can create, you can potentially build a cool product or great service without as large of an organization behind it.
And they were talking about how that can be a problem for buyers, because buyers need support; they want to be able to reach out to someone. If it's a couple of people working part-time jobs who just decided to launch this product, you're probably not going to get as great support or ongoing development as from, I'll call it in quotes, "a real company" that's really going after it, with a customer success team and all of that.
So there were lots of those pickups throughout the week, little soundbites that helped you go, "Oh yeah, I'm tucking that away for now." Not just chasing the shiny marketing message from a software tool, but having some different variables for how to evaluate it.
So I thought that one was cool too.
Amanda: I liked that. My last takeaway is really simple: AI might do some of the work, but a human is still the artist. And I loved that. It's a good thought, because humans are still the ones who determine what gets put into the world.
Like, "This is the image that AI generated and I tweaked, but I selected this one out of the 60 it created because I feel like that one's art, and I feel like it represents this." So even if something is AI-generated, it's still human-championed and usually human-directed.
So even though some of the heavy lifting might be done by automation, humans are still the ones curating it.
Tyler: Yeah, for sure. That one's really good. So, in summary: all of this still requires humans to operate it at this point. Maybe it looks different in five years, but for now it still needs a human filter before it hits the real world.
Amanda: Awesome. Well, cool. We'll see you later. All right. See you.