Recent research in the UK suggests that automated checkouts are making elderly people lonelier and more socially isolated, because for many elderly people who live alone, going to the shops and chatting with the staff is their main daily social interaction. I think the need for human interaction means that many jobs in service-based industries will never be completely automated. Even if it gets to the point where all the actual work is done by robots/AI, companies will still employ humans to talk to their customers/clients. Maybe on board a spaceship in the future there'll be a human relations officer (or similar title) whose job is to ensure that people get human interaction where they want it.
Personally, I hate talking to robots on the phone and I also hate automatic checkouts. I realise that I mostly hate automatic checkouts because I'm too lazy to scan all my items myself; I'd rather have a friendly human do that and help me with the packing. You know, laziness plus a preference for social interaction. My reason for hating talking to robots on the phone (their total incompetence) may not be an issue in the future. However, researchers into animal sentience have found that certain decision-making abilities require the animal to be sentient. So if you had a machine that was that good at human interaction and at making certain kinds of decisions, it would be sentient, and would therefore require rights, rest time (the need to sleep is a consequence of being sentient, according to the aforementioned researchers), holidays, trade unions etc., and might even want a salary. It may end up being more cost-effective just to employ humans. Any function that requires a sentient being comes with all the baggage of employing sentient beings, organic or machine.
BTW I have a science fiction idea on the backburner related to this: the POV character is a sentient robot that wasn't designed to be sentient and isn't treated as sentient, but is sentient because AI programmers and evolutionary biologists never shared that much knowledge with each other. I have other stories to finish before I start writing it, but that's the general idea. If you program an AI to have certain capabilities, you may be inadvertently making a sentient being. Sorry for the vague language about "certain capabilities"; I can't remember all the finer details, and as this research is in its infancy, I don't think the researchers themselves know in exact detail which abilities require sentience and which don't. IMO it's a question that ought to be answered before AIs get much more advanced. Hence the idea for the story. Sentience has evolved separately on three different branches of the tree of life*, including the insect branch, so it's not just a question of animal rights but potentially a question of AI rights.
*IIRC they are land vertebrates (birds, mammals and reptiles), insects (can't remember if all insects or just some) and the squid/octopus clade. Though having seen footage of a species of fish solving complex problems, I'm inclined to think it's present in more animals than just those...
So yeah, for the spaceship: roles relating to ensuring that humans get enough social interaction (and the kinds of social interactions they want, when they want them), and roles relating to meeting the needs of sentient AI.