Brain ... Trust?

Stevens professors on AI’s perils and promise.

Throughout Stevens’ history, its mission has focused on preparing students for success in a complex and technology-centric world. Today, no technology looms larger over humanity than artificial intelligence (AI). In February 2024, a group of six professors representing disciplines across the university’s four schools came together to discuss the effects of AI on their fields — and how its perils and promise could change the world. Read on for their thoughts on government regulation, ChatGPT in the classroom, the dying(?) art of coding and more.

Six Stevens professors sit in chairs in conversation. From left to right: Jeff Nickerson, Jacqueline Libby, John Horgan, Philip Odonkor, Lindsey Cormack.

Editor’s Note: The following AI roundtable discussion was edited for length. View the full conversation in this video.

John Horgan

John Horgan (Moderator): I’m going to start by asking each of you to briefly describe how artificial intelligence is changing your discipline. 

Brendan Englot: When I came to Stevens almost 10 years ago, I was not using AI in any of my research, which focuses on autonomous navigation for mobile robots, ground robots, underwater robots, flying robots. Now [AI] touches almost everything I do.  

There are two particular areas where AI has been especially impactful. One of them is perception, how autonomous vehicles see the world. The other is decision-making; we try to build high-fidelity simulations of robots and autonomous vehicles and use those to train robots to make good decisions in challenging circumstances. 

Philip Odonkor: My work looks at data-driven design of energy systems like buildings. I try to understand how we can use data to help them use energy more efficiently. It’s all about decision-making. What AI is really good at is taking a lot of variables and making decisions using those variables at speeds that humans really can’t match. 

Lindsey Cormack: When I think about how AI is changing my field, I think about it in two ways. There’s political science, the discipline, like, how can we ask these questions? How can we learn about things? I think the potential for AI in politics is enormous.  

Then there is the work that is done by political organizations in terms of how they email people, how they try to persuade. All of that is made easier when we have these large models. What are the arguments that might make sense to [individual people]? Then [AI] can craft a really persuasive, pretty individualized algorithmic answer. That’s actually really exciting for the field of politics. 

Jacqueline Libby

Jacqueline Libby: I have just started the Robotic Systems for Health Lab, where we’ll be building soft robotics and hybrid soft-hard robotic devices for physical therapy, rehabilitation and perhaps some surgical applications as well. I have a background in robotics, computer science and mechanical engineering, and I’m trying to combine all of those to build embodied forms that cover our bodies like exoskeletons, or “exosuits,” as we like to call them when they’re soft. We use AI to design the shape of those devices, and also to interpret the data coming from biosensors we put on the body, to figure out what that data is telling us and how it can better help us. 

Jeff Nickerson: One of the things we look at is the effect of artificial intelligence on labor markets. Right now, artificial intelligence is being used not only as a utility within information systems, but also to design and build information systems. One of the things I study is how that is accelerating our ability to build, as an example, better software. 

Possible Perils? 

Overall, this is a pretty positive look at AI. Are there any possible problems related to AI in your fields, or more generally? 

Englot: Sure. As much as there’s exciting promise associated with decision-making, we really have to be careful that the systems we’re working with are trustworthy and explainable to the end users. There are so many opportunities to release new products, to profit from the release of AI to the public. We’re all a little nervous about letting AI make important decisions too soon, and about the risks and failures that could follow, because I think the public will be expecting a perfect track record. 

The good news, at least, is [developers] are relying on lots of sensors, perception systems, not necessarily just AI, to make good decisions. I think there does need to be a conversation about how those technologies are regulated. 

Philip Odonkor

Odonkor: One area that I think the conversation oftentimes glosses over is the energy consumption associated with developing these AI systems. If you look at it, a lot of these systems are now consuming enough electricity to rival small nations. At the end of the day, we’re trying to solve climate change, and if we need AI to help us with it, we have to make sure AI is also not directly contributing [to it], or at least is coming up with solutions to mitigate some of its impact on the environment. 

Cormack: We have the fears we know, and then we have the fears we don’t. The fears we know are straightforward. You could use a generative AI to make images that are not correct. You could use them to make videos that are inauthentic. The thing that is scarier is the parts we don’t know, which is if we have this overflow of information that is inauthentic and meaningless, then we have the risk that people are more likely to disengage from politics and say, “I can’t trust the systems that are generating all these things.” That’s the part that I lose sleep over. 

Libby: It’s really important that we educate more of our population to sit at the intersection of basic math and basic machine learning, as well as political science or other fields, so they have a fundamental understanding of what problems in the world need to be fixed. It really needs to be interdisciplinary. 

Nickerson: I think there’s a problem related to scaling. The problem with this is that we may get extremely powerful systems, or we may get one very powerful system. I would like to see a little bit more broadening out of different kinds of models, more open-source models, more local models, rather than having them be all in the cloud and all needing thousands of GPUs every minute. 

Illustration of a woman writing alongside a robotic arm that is also writing.

AI in the Classroom 

I teach [first-year students], and I also teach seminars for older students. I’m trying to figure out: “What should I do with these models, with ChatGPT, in the classroom?” 

Englot: I encourage students to use it, and maybe also experiment a bit to find out the ways that it can be useful. I’m optimistic that [ChatGPT] can be really powerful and transformative as an educational tool because everyone has the ability then to have personalized tutoring as they’re looking for reinforcement of all the areas that they didn’t catch when they went to lecture, or when they went to office hours. 

Cormack: I have a class called Judicial Process, and at the end of the term everyone has to do a Supreme Court case and they have to write a brief. Now, we’ve only been together for 15 weeks. It’s not law school. They don’t know how to write a brief. I said, “Go to ChatGPT for your first draft. Go see how it does it for you.” It doesn’t get everything right, and that’s part of the learning, too. They have to pick out what’s right, what’s wrong [and say to themselves] “Based on what I know of this case, let me go do some of my own research.” 

Then they get to a better end result because they had this little shortcut that let them get somewhere they weren’t going to get on their own. As I’m teaching our quantitative social science introduction classes, I am thrilled that we have this as an option for my students who didn’t do computer programming in high school.  

That said, I think something that we all have to be cautious of as educators is making sure that the interpersonal pieces are still there. If we give too much to this, then we sort of lose what it is to be a human with one another. In my classroom, there are no computers allowed. I really force interpersonal interactions, where they say their thoughts and hear other people’s thoughts. 

Odonkor: We can’t go back to a point where [these tools] don’t exist. Rather than restricting students from using them, I encourage them to use them. What I do in my classes is make sure the questions that I’m asking require some level of creativity and reasoning that operationalizes what they’re learning in class. 

Finding a Common Language 

I read a recent interview with Jensen Huang, CEO of NVIDIA. He said that the idea of coding as being the best thing you could major in is over because of ChatGPT and all these really powerful AI programs. When he’s talking to young people, he says: “Specialize — in biology or medicine or politics. Learn something about some field because the AI will take care of itself. The coding will take care of itself.” 

Jeff Nickerson

Nickerson: I totally disagree with that. You can have a lot better conversation with ChatGPT or Gemini or Claude if you ask it to write a Python program and then you talk with the model about the program, because now you’re talking about an artifact that’s common between you and it.  

If you don’t understand how to write programs and all the stuff around it, then I think it’s harder to find that common language between artificial intelligence and human intelligence. 

Englot: I think implicit in [Huang’s] comment is maybe a misunderstanding about what a computer science major learns, [which is] how to use many different tools in related ways. You learn how to debug, and how to use many different tools to solve the problems that underlie all of those tasks. There’s also a mathematical model serving as the foundation. All of those things are going to become more important than ever, I think, as we work with AI. 

Odonkor: On face value, I actually agree with him. If you’re just focusing on the coding aspects of things, you might be competing with an AI. If you have specialized in something, then now you can be using coding as a steppingstone to achieve your goals. Then to Jeff’s point, obviously coding comes with a lot more. You learn how to problem-solve. 

Libby: It is important that we have people who really know those basic problem-solving skills so that we can keep coding at higher and higher levels. For people who are scared that AI will become sentient and start doing harmful things, we need programmers who understand everything going on inside that black box.  

We also need people who are working in other specialties but also have a good fundamental programming knowledge so they can work in an interdisciplinary way. So they can, for instance, stop racial bias in criminal justice or stop racial bias in healthcare. As this technology becomes more and more complex, you need more and more educators who can keep on creating this knowledge transfer between different groups.   

The Challenges of Regulation 

Can AI be regulated in any meaningful way to maximize the benefits and minimize the downsides? 

Lindsey Cormack

Cormack: Speaker of the House Mike Johnson created a task force with Democrats and Republicans. They are supposed to come up with some ideas of what these regulations would look like. This is a field that’s hard to regulate, partly because a lot of the players in politics themselves don’t have the requisite knowledge of how the systems work.  

It seems like the industry is almost welcoming some form of governance. I think how it gets done is just really hard, because you don’t want to constrain an industry that’s growing. 

Nickerson: Sometimes instead of going right to regulation, I think it makes sense to think about policy, which is a bigger blanket where you can have regulation, but you can also have incentives to make certain things happen. 

Englot: Yes. Whether it’s achieved through regulation or not, one item that was brought up in the recent White House executive order on AI was just the idea of watermarking and trying to identify content that is sourced from AI. I think you can only make the public safer through increased transparency. 

Odonkor: I struggle with this idea of regulating AI. Take watermarks, for example. A watermark would tell us that AI touched an image, but then I just need to crop that image and all of a sudden it’s gone. Are these just band-aids, or are they actual regulations? 

Cormack: There could be regulation, though, on input sources. 

Libby: You need to know what the inputs and outputs are. That’s going to help the regulators understand it. It’s going to help educate everyone. It’s also going to help us make sure that those inputs are correct, that they’re fair. That they’re not [violating] people’s privacy. Or another problem could be that the inputs are biased in some way or another. 

Illustration of a robot hand holding a flower in a sphere.

Leveling the Playing Field 

The early promise of digital technologies, and especially the internet, was that they would empower people at the grassroots level and make the world more democratic. Is that still possible with AI and all the other technologies that go along with it? 

Odonkor: Yes, I agree 100% that it can help us bridge divides. What really excites me is AI’s impact in developing regions. If you look at agriculture, for example, in developing nations, they’re facing the implications of climate change — increased pressure for higher yields, disease, all these problems — but they don’t have the tools of modern agriculture that we have; at least they don’t have easy access to those tools.  

Now, just using their cell phones, for example, they can learn about soil health. They can learn about diseases and diagnose diseases. They can learn how to use their resources more efficiently, just using AI to analyze these things.  

All of a sudden, you’re bridging what would otherwise have taken years and years of learning and infrastructure-building. If we put more money into it and enable people in these regions, they can bridge a lot of the divides we see right now, whether digital, health-related or educational. We’re already seeing it in fintech. It’s enabling people who didn’t have banking services, for example, to have access to secure banking on their phones where they didn’t before.  

Libby: I think it’s empowering people who used to not have power to gain power. Growing up and studying computer science and STEM-related fields, which were very male-dominated ... I felt very much like I was in a boys’ club. If it wasn’t for the power of Google, I wouldn’t be where I am today. That was my best friend in teaching me what I needed to know anytime I had a question. That was incredibly empowering for me as a woman. Now you can ask ChatGPT. 

Brendan Englot

Englot: I’m largely in agreement. [But] there’s another side to the coin. Of course, there are going to be new types of fraud and new types of disinformation. What we’ve found to be especially prevalent is that AI is also a very powerful tool in the hands of someone who wants to take advantage of others and create disinformation. But with every new wave of technology, that kind of thing happens, and additional risks present themselves. I think the pros outweigh the cons, and there’s largely a positive outlook on the future in terms of what it means for empowering the average person. 

Cormack: I think it’s probably contextual. Yes, it can be a democratizing, liberalizing force in places where there are democratic or liberalizing forces that want to spread. In authoritarian regions, where you have such power disparities and whoever has access to the best AI technologies [is using them] for propaganda or control or violence, it’s harder to make the case that it’s liberalizing. 

A Bubble — or Here to Stay? 

Is it possible that what we’re going through right now could be followed by a collapse? Are we in an AI bubble or is this permanent? 

Nickerson: I think it’s permanent. I’m fighting a paper deadline tonight and last night I needed to get some code working. Within an hour, working with a large language model, I did the equivalent of what would’ve taken me at least 10 days previously to do. I think many people with knowledge work will be using this forever, and I think it’s going to get better. 

Cormack: I think we’re in AI spring — AI summer isn’t here yet. I think it’s here to stay and here to grow. 

Englot: To play devil’s advocate, I guess I could argue for why I think it might be a little bit of a bubble, and it’s because our first foundation models make everything seem possible. From all of the imagery and text we have on the World Wide Web, there’s so much we can extrapolate that’s useful, that has made tangible improvements in our lives and might continue to for decades to come. I don’t think the next foundation models are going to come easily, because we’re going to need to harvest tremendous amounts of data to create them. 

Odonkor: Now that we’ve already opened that box and seen what AI can give us in terms of decision-making, reliability and control, I can’t see a future where we step back and say, “Oh, this is a bubble. The way we were controlling things previously was better.” I just can’t see it. The optimization opportunities, the efficiency opportunities are just way too large for me to see us taking a step back.

View the full roundtable video