In a world where interaction with machines is increasing, machines that understand your language are a treat. That is why, when Meta, the parent company of Facebook, launched ‘Meta AI in Urdu’ for users in Pakistan on Monday, there was obvious reason to celebrate. Meta AI is a generative AI assistant that can answer questions, generate images and videos, and help with creative tasks across Meta’s apps such as WhatsApp and Instagram. The quality of an AI model’s output relies heavily on clear prompts, and in countries like Pakistan, where English is not the first language, this move is a step towards accuracy and inclusion. If people are able to communicate with AI in their own language, there is less chance of misunderstanding and a greater chance that the output generated will be of higher quality. Meta AI runs on Llama, an open-source large language model (LLM). If the model is trained on Urdu, it could open up opportunities for local developers and researchers. As Pakistan works on building its indigenous LLM, access to a pre-trained model like Meta’s could help with fine-tuning AI to reflect our cultural, linguistic and contextual realities. Urdu-enabled AI could also power several practical and transformative applications: users will be more comfortable using AI-enabled healthcare or education platforms that understand Urdu and give accurate responses in it. Singapore’s recent efforts to build an LLM that incorporates the region’s linguistic diversity show that language-aware AI is central to digital sovereignty.
But this opportunity also comes with risks that the government must actively mitigate. First, there should be comprehensive training programmes that teach people what AI can and cannot do – and to what extent they should rely on it. Second, several use cases indicate that AI will likely find its biggest role in customer service, where chatbots will either replace or assist human agents. In that context, it is essential for Urdu AI to be mindful of cultural nuances and to generate responses that are polite. Since the Ministry of IT was actively involved when Meta launched Meta AI in Urdu, we hope the authorities will form a congenial working relationship in which the company is routinely informed of any discrepancies in the responses generated.
There is another important aspect to consider: Meta’s record on language moderation raises legitimate concerns. Certain words, such as shaheed (martyr), which are commonly used and contextually neutral in Urdu, have been flagged or censored by Meta’s platforms in other contexts. For Pakistan, it is critical to ensure that it has a voice in setting, or at least recommending, the guardrails and moderation policies that govern local usage. And as people begin using AI-generated Urdu content for social media or professional communication, ethical and legal safeguards must ensure that the model does not produce inappropriate, misleading or even blasphemous content. In a world where everything is becoming ‘AI-enabled’, it is important to focus on how we can participate in the data training process to make these models more effective, rather than reject them outright.