Part 6/11:
Fairness: Because AI models rely heavily on historical data, they risk perpetuating biases or outdated patterns, which raises the question of whether AI-driven decisions are equitable and just (a simple audit of this is sketched after this list).
Expandability and Scalability: The data used to train models must be representative. Otherwise, outputs may be skewed or unreliable, especially when models are scaled across different contexts.
Reliability: Ensuring that AI systems interpret data correctly and produce consistent, trustworthy results is vital. Human oversight remains essential, with curation and validation playing key roles.
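As a concrete illustration of the fairness and representativeness concerns above, here is a minimal sketch of the kind of audit one might run over a model's decision log. The data, group names, and the choice of demographic parity as the metric are assumptions for illustration, not something specified in the source.

```python
from collections import Counter, defaultdict

# Hypothetical decision records: (demographic group, model decision 0/1).
# In practice these would come from an audit log of the deployed model.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

counts = Counter(group for group, _ in records)
approvals = defaultdict(int)
for group, decision in records:
    approvals[group] += decision

# Representativeness: how much of the data each group contributes.
total = len(records)
for group, n in counts.items():
    print(f"{group}: {n / total:.0%} of records")

# Demographic parity gap: difference in positive-decision rates across groups.
rates = {g: approvals[g] / counts[g] for g in counts}
gap = max(rates.values()) - min(rates.values())
print(f"approval rates: {rates}, parity gap: {gap:.2f}")
```

A large parity gap or a group that contributes only a tiny share of the records is a signal to re-examine the training data before scaling the model to new contexts.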
He advocates for human-in-the-loop approaches: adding human oversight to machine outputs to catch errors, correct biases, and ensure responsible deployment.
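The source does not describe a specific mechanism, but one common way to implement this kind of oversight is to accept high-confidence model outputs automatically and route low-confidence ones to a human reviewer. The sketch below is illustrative only; the threshold value, `ReviewQueue`, and `route_prediction` are hypothetical names, not part of any described system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff, tuned per application


@dataclass
class ReviewQueue:
    """Holds model outputs that a human must confirm or correct."""
    pending: List[Tuple[str, str, float]] = field(default_factory=list)

    def add(self, item_id: str, label: str, confidence: float) -> None:
        self.pending.append((item_id, label, confidence))


def route_prediction(item_id: str, label: str, confidence: float,
                     queue: ReviewQueue) -> str:
    """Accept confident predictions; send uncertain ones to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # automated decision stands
    queue.add(item_id, label, confidence)
    return "pending_human_review"


# Example usage with hypothetical model outputs.
queue = ReviewQueue()
print(route_prediction("doc-1", "approve", 0.97, queue))  # -> approve
print(route_prediction("doc-2", "reject", 0.55, queue))   # -> pending_human_review
print(queue.pending)                                      # -> [('doc-2', 'reject', 0.55)]
```

The design choice here is simply that the human sees only the uncertain cases, so oversight effort is concentrated where errors and biased outputs are most likely to slip through.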