AI & Policy

AI Global Index on Responsible AI

Estimated read time: 1:20


    Summary

    In a session hosted by the Asia Open RAN Academy, the first edition of the Global Index on Responsible AI was discussed, with an emphasis on the development of AI policy in the Philippines. Despite the presenter's telecom and cybersecurity background, the focus was the role of responsible AI in respecting human rights throughout AI's lifecycle. The Global Index provides a human rights-based benchmark across technical, social, and political dimensions. While the Philippines shows promise, especially among non-state actors like academia and civil society, large gaps remain globally in effective AI governance and the protection of human rights. The talk concluded with insights into the policy moves needed for the Philippines to align more closely with international standards.

      Highlights

      • The Asia Open RAN Academy discusses the pioneering Global Index on Responsible AI. πŸ“š
      • Focus on creating AI policy in the Philippines, despite no current AI policies. πŸ‡΅πŸ‡­
      • The Global Index prioritizes human rights benchmarks over mere technical aspects. πŸ”
      • Philippines ranks highest in Southeast Asia for non-state actors' role in AI. πŸ†
      • The importance of human oversight in AI systems emphasized. πŸ§‘β€βš–οΈ
      • Philippines' non-state actors like UP and NGOs lead in responsible AI initiatives. 🌟
      • The EU's AI Act serves as a potential model for the Philippines' AI policies. πŸ“œ
      • Significant global gaps in ensuring AI safety, security, and reliability highlighted. 🚫
      • The index reveals nearly 6 billion people lack adequate AI human rights protections. 🌎
      • Call for the Philippines to engage more in international cooperation for AI development. 🀝

      Key Takeaways

      • AI governance doesn’t always equal responsible AI governance. πŸ“Š
      • Countries show limited mechanisms to protect human rights with AI. πŸ›‘οΈ
      • International cooperation is crucial for responsible AI development. 🌐
      • Gender equality remains critically underserved in AI initiatives. ♀️
      • Inclusion and equality issues are largely ignored. 🚫
      • Labor protection in AI-driven economies is lacking. 🏒
      • Cultural and linguistic diversity must be integrated into AI. 🌍
      • Major gaps exist in AI system safety, security, and reliability. πŸ”
      • Universities and NGOs are key in fostering responsible AI. πŸŽ“
      • Most countries are far from adopting responsible AI practices. 🚧

      Overview

      The session by the Asia Open RAN Academy brought to light the first edition of the Global Index on Responsible AI, focusing specifically on policy development in the Philippines. Despite being a newcomer to AI research, the presenter shared insights from the Global Index, aiming to bridge technology and policy through a human-centric approach.

      The discussion delved into the need for responsible AI that upholds human rights across the AI lifecycle, from design to deployment. The Global Index measures not just technical but also social and political commitment to responsible AI development, with the Philippines showing progress particularly in contributions by non-state actors.

      The conclusion recommended a firm move toward legislation and international cooperation to fill critical gaps in AI ethics, governance, and human rights protection, underscoring the need for the Philippines to draw from international models and enhance its responsible AI initiatives.
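The Overview describes a layered structure: indicators roll up into thematic areas, which roll up into pillars and dimensions. The Global Index's actual aggregation method is not given in this talk, so the sketch below is only illustrative, assuming a simple unweighted mean at each level; the dimension names, area names, and scores are hypothetical.

```python
# Illustrative sketch of how a layered index score could roll up.
# The aggregation rule (unweighted means) and all scores below are
# hypothetical; the real Global Index methodology is not described here.
from statistics import mean

# indicator scores (0-100) grouped by thematic area within a dimension
hypothetical_scores = {
    "governance": {
        "national_ai_policy": [70, 55],
        "impact_assessment": [40, 35, 50],
    },
    "human_rights": {
        "data_protection": [65, 60],
        "gender_equality": [20, 30],
    },
}

def dimension_score(areas: dict) -> float:
    """Mean of thematic-area scores; each area is the mean of its indicators."""
    return mean(mean(indicators) for indicators in areas.values())

def country_score(dimensions: dict) -> float:
    """Mean across dimensions, giving a single headline number."""
    return mean(dimension_score(areas) for areas in dimensions.values())

print(round(country_score(hypothetical_scores), 1))  # -> 47.9
```

A layered roll-up like this is why the talk can report both a headline figure and pillar-level gaps (for example, strong non-state-actor scores alongside weak human-rights protections) for the same country.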

            AI Global Index on Responsible AI Transcription

            • 00:00 - 00:30 [Music] ...to discuss the first-ever edition of the Global Index on Responsible AI and insights for crafting potential AI policy in the Philippines. I am quite new to the AI space; my area of expertise, as
            • 00:30 - 01:00 some of you may know, is telecom and internet infrastructure policy, which I've been working on for over 15 years. My other field is cybersecurity policy, in particular the protection of critical information infrastructure. So this is actually my debut in AI research, through the Global Index on Responsible AI. My personal advocacy as a policy analyst is to bridge the understanding between technology and policy, which I
            • 01:00 - 01:30 hope we will achieve in this session. Some of the concepts that I'll be discussing with you were already briefly taken up by our AI expert, Dr. Peter C. For this session we will do a deep dive into the intricate aspects of responsible and ethical AI, using the lens of this pioneering global study that centers on promoting human
            • 01:30 - 02:00 rights in the age of AI. So let's begin. We will first discuss the concept of responsible AI and what it means, then the results of the first edition of the Global Index on Responsible AI. We will then focus on the 10 key takeaways from the survey, and put a spotlight on the Philippines and how we fare so far in promoting responsible AI. Then finally we will
            • 02:00 - 02:30 tackle insights that can be applied in crafting AI policies in the Philippines, noting that there are several AI-related bills already filed in the 19th Congress, the current Congress. Finally, we will zero in on key insights as AI evolves and becomes more and more relevant to our lives. At the end of this lecture you should be able to understand the importance of adopting a human-centric approach in designing,
            • 02:30 - 03:00 developing, and deploying AI solutions to ensure that human rights are upheld, protected, and prioritized in the entire AI value chain. Okay, so what is responsible AI? We define this as making sure that the design, development, deployment, and governance of AI is done in a way that respects and protects human rights and upholds the
            • 03:00 - 03:30 principles of AI ethics through every stage of the AI life cycle and value chain. This requires that all actors in the ecosystem, whether government, the private sector, civil society organizations, or academia, are involved in the national AI ecosystem and take responsibility for the human, social, and environmental impacts of their decisions. So it's not just about us humans as
            • 03:30 - 04:00 individuals, but humans and groups of people as a society, and also the environment where we live. The responsible design, deployment, and governance of AI should be proportionate to the purpose of its use and should meet the technological needs of the individuals and societies it seeks to serve. So that is the definition of responsible AI
            • 04:00 - 04:30 according to the Global Index on Responsible AI. The Global Index measures government commitments and country capacities toward the development of responsible AI using a technical, social, and political lens. This is something not being done by the present indices out there; the studies now are mostly focused on the technical lens or technical aspects.
            • 04:30 - 05:00 So this moves beyond the usual parameters of innovation, R&D, or investments being put by governments into the private sector. This index fills a critical gap by employing human rights-based benchmarks, and it also covered continents and countries that are usually not found in a study about AI. Between November 2023 and
            • 05:00 - 05:30 February 2024, data was collected firsthand by 138 in-country researchers; I was the country researcher for the Philippines. Researchers completed a survey containing 1,862 questions across 98 thematic areas, designed to ascertain the conditions and actions being taken to advance responsible AI in each of the countries
            • 05:30 - 06:00 surveyed. After the survey, or data collection, was done, a global team of quality assessors conducted an exhaustive review of all the data collected. The data covered for this first edition of the Global Index was from November 2022 to November 2023, a one-year period. Any government framework, government action, or non-state actor initiative after
            • 06:00 - 06:30 November 2023 is not included in the survey. Okay, so this is the conceptual framework for the Global Index. It is composed of three dimensions that align with human rights-based standards and democratic principles; as you can see there, they are capacities, human rights, and governance. Three pillars account for the AI ecosystem: policy
            • 06:30 - 07:00 frameworks, government actions (which may not necessarily be policy, but any initiative by the government), and non-state actors. There are 19 thematic areas covering the core components of the concept of responsible AI, which we just defined earlier, and 57 indicators measuring the performance of each thematic area within each pillar. That's too much information,
            • 07:00 - 07:30 I know, to take in right now, but to give you an example: the survey looked into, let's say, government or national policy frameworks, and it looked at these initiatives in each of the 19 thematic areas on responsible AI capacities that you see on your screen. The same was done for government non-policy actions as well as non-state
            • 07:30 - 08:00 actors. So each thematic area assesses the performance of the three different pillars of responsible AI; as you can see, it was an iteration of the different actors across the thematic areas. Okay, so the three dimensions that promote human rights and democratic principles are the following. The
            • 08:00 - 08:30 governance dimension measures the degree to which national governance regimes uphold effective and rights-preserving practices in responsible AI. So it's about national-level initiatives; in the case of the Philippines, these would involve the Office of the President and the executive branches, as well as the judiciary, particularly the Supreme Court, for decisions that have national
            • 08:30 - 09:00 coverage, as well as laws passed by Congress. The human rights dimension measures the extent to which countries are taking steps to protect, promote, and respect key human rights implicated in the use of AI. The capacities dimension looks at whether key state capacities required to advance not only AI technically, but responsible AI,
            • 09:00 - 09:30 actually exist and are being met and promoted. Each of the three dimensions in the previous slide is divided into the 19 thematic areas that cover the key components of what we consider responsible AI. These indicators are mainly based on the UNESCO Recommendation on the Ethics of AI and the OECD Principles on
            • 09:30 - 10:00 AI. Apart from those frameworks, consultations with stakeholders were also conducted, especially in the Global South. This is very important to emphasize because, as I mentioned, many studies focus on developed countries that are technologically advanced and have mature institutions using AI. But as Dr. Peter C mentioned earlier, in the Philippines,
            • 10:00 - 10:30 even if we are not as developed technologically, financially, and economically, Filipinos are actually heavy users of AI. So this is an important distinction for this particular study. Okay, next. Under the governance dimension there are nine thematic areas. We will do a deep dive into each of these areas, because this is very important in understanding and raising
            • 10:30 - 11:00 awareness about the different aspects of responsible AI principles and guidelines as well as policy frameworks. Let's start with national AI policy. Policy is a course of action adopted by the government in an official capacity. AI policies are important because they articulate a cohesive strategy for how the country intends to deal with the challenges and opportunities posed by AI. So
            • 11:00 - 11:30 policy actually guides the actions of all stakeholders in various sectors. As I always say, the state, or the government, is in a unique position because it is the only institution that can put in place policy that covers everybody; national AI policy is important in that respect. In the Philippines, we don't have AI policies yet. What we do have are AI strategies: in
            • 11:30 - 12:00 2021, for example, and Dr. C also mentioned this, the Department of Trade and Industry launched the country's first national AI strategy roadmap, and it was updated this year. Also, the Department of Science and Technology launched an AI research and development strategy, and I can imagine that in the near future other government agencies with jurisdiction over a particular sector or industry will also issue their own strategies or guidelines. Whether or not
            • 12:00 - 12:30 this is a good thing: Dr. Peter C mentioned earlier that we usually work in silos, and there doesn't seem to be a comprehensive strategy or direction that the Philippines is taking when it comes to AI. But for now we are seeing these initiatives, and I can say that having a strategy or guidelines per sector is actually better than having none. Okay, next: impact assessment.
            • 12:30 - 13:00 Impact assessment is a structured process for considering the implications, for people and the environment, of proposed actions, while there is still an opportunity to modify or even abandon the proposed action if the impact is seen to pose a potential or actual harm. This is important because, in the context of AI, we can use impact assessment as a tool for predicting the
            • 13:00 - 13:30 anticipated, and assessing the actual, consequences of an AI system in terms of its benefits or harms to humans. Users of AI must consider the type and scope of the impact before and after deployment, so impact assessment can be done before and after. Especially with so many studies now being done by developed countries, we already have tools that we can use. That is actually the benefit
            • 13:30 - 14:00 of being a latecomer to the game: we can see how others are doing it, and then we can adopt, and adapt, the tools that they use to our own particular context. The third thematic area is human oversight and determination. This is the act of supervising something, like a practice or process, by humans. This is very important: by humans, to ensure that it's being done
            • 14:00 - 14:30 correctly. In AI, this requires systems to be designed in a way that allows us humans to supervise or control them and provides clear channels of intervention or interference to prevent or disrupt adverse effects in the use and deployment of AI tools. This is important because oversight and determination need to be done by humans, not by another machine or another large language model. You cannot have
            • 14:30 - 15:00 other software or models checking other models, because then it becomes a loop: who checks the machine that's checking the other machine? So human oversight and determination is an important aspect. Okay, next is responsibility and accountability. This means being answerable or accountable for something within one's power, control, or management.
            • 15:00 - 15:30 In AI, this concerns the humans or the decision-making body, whether a public or private entity, to whom a certain outcome may be assigned. So the question is: who is responsible for what, "what" being decisions, outcomes, and impacts? For example, when an AI system hallucinates, and someone posted this question earlier,
            • 15:30 - 16:00 hallucinate meaning it produces output that is not factual, which human, group, or company will be accountable for the error? In the case of the Philippines, a country that is a large consumer of AI, we do not manufacture equipment and we do not create large language models; we are users. Then who will be responsible when an AI system
            • 16:00 - 16:30 hallucinates? And if the error results in actual harms to us Filipinos, who will be held accountable in court? What if the AI platform that you're using is based in another country? Is there a way for our local courts to actually go after the perpetrator, and is there a way for us to access remedy or redress for the harm done to us? Those are very important questions, legal questions, but
            • 16:30 - 17:00 also very practical ones. Next is proportionality and do no harm. Proportionality means being in proper balance or relation vis-a-vis the size, quantity, degree, or severity of something. AI systems must not exceed what is necessary to achieve their legitimate aims or objectives. They should also be appropriate to the
            • 17:00 - 17:30 context in which they are needed or were originally designed. We often hear about proportionality in data privacy; actually, even without AI, data privacy has already made us aware of proportionality. The Data Privacy Act of 2012 requires that the processing of personal information be adequate, relevant, suitable, necessary, and not excessive in relation to a declared
            • 17:30 - 18:00 or specific purpose. For example, if a company offers an AI-powered old-age filter, such as what you see on the screen right now, it must collect just the right amount of data on people's facial or body features, because that's what it needs to age your image for you. But it should not, ideally, collect data on a
            • 18:00 - 18:30 person's home, for example, like what you see in my background, which is blurred, or the location of the user of that AI tool, because that is not necessary for the app's advertised or intended purpose. So that is what proportionality means, and the collection of data beyond the intended purpose can actually do harm. That's why it's important to keep it in
            • 18:30 - 19:00 check. The sixth area is public procurement; we're still under the governance dimension. Public procurement, as many of you here from the ICT and other government agencies know, is the process of purchasing goods, services, and works by the government. When it comes to AI, this focuses on ensuring that governments adhere to international procurement laws and principles as outlined in a
            • 19:00 - 19:30 country's policy framework on AI, and this must emphasize fair, transparent, inclusive, diverse, and non-discriminatory procurement of AI systems. Now, you will hear these terms repeatedly in many of the indicators. In government procurement of AI systems, transparency is very important, and inclusivity, especially if
            • 19:30 - 20:00 your constituents, or the citizens that you serve, are actually a very diverse group of people. In the Philippines we may not have different races like in the US or in other Asian countries, but we have a very diverse group of indigenous peoples with different languages, and we have to make sure that even the marginalized groups are well represented. In government procurement of AI
            • 20:00 - 20:30 systems, that is a very important consideration. Still under AI governance: transparency and explainability, which Dr. Peter C explained earlier. Transparency seems intuitive, but sometimes we need to remember what these terms mean. Transparency
            • 20:30 - 21:00 means that something, like a process, practice, or decision, is being done in an open manner, without secrets, so that people can trust actions to be fair and honest. Even in our personal relationships, if you're not transparent with your family or your friends, it seems that you have a lot of secrets, so people will not be able to trust you and will not see you as honest. Explainability is the quality of
            • 21:00 - 21:30 enabling people to understand how a particular system works, or how a particular outcome or decision was achieved or made, by providing information that is, one, sensible, or based on data and logic, and, two, easy to understand. So it is not enough that you're able to explain something; it has to be in a form that humans
            • 21:30 - 22:00 actually understand. This is closely linked to the concept of interpretability, or the capacity to explain the factors and logic that led to an outcome, in terms, again, that humans understand. The reason this needs to be emphasized is that other machines can explain, let's say, what a neural network can do, but if the explanation is in a format or language that we cannot understand, then it still is not
            • 22:00 - 22:30 explainable, and in effect it's still not transparent. The inner workings of an AI system must be open and accessible to humans. It must be easy for us to understand the explanation of an algorithmic model, for example, the data that drives these models and the rationale for their use. However, as neural networks become more advanced, explainability and interpretability are becoming more
            • 22:30 - 23:00 impossibly challenging. A colleague of mine, whom some of you might know, Mr. Dominic Liot, explains that neural networks now have so many layers that it's practically impossible for humans to understand how the network figured out a certain output. So there's a growing field in AI called mechanistic interpretability, which attempts to find ways to distill what a network does. But then again, as I mentioned earlier, that is another
            • 23:00 - 23:30 machine making sense of what neural networks do, so we still do not understand them. This is going to become a bigger problem in the near future as neural networks advance, so we have to catch up and devise a way for humans to be able to interpret what these networks are doing. Next is access to remedy and redress. Access means
            • 23:30 - 24:00 legal mechanisms that allow human rights violations to be thoroughly investigated and adequately resolved; it also involves rectifying the harm caused and holding those responsible to account. Remedy refers to the removal of the harm: if there is harm done, there should be a way to remove it. Redress refers to compensation or reparation for that harm. So those are three interrelated concepts.
            • 24:00 - 24:30 Persons who have suffered harm as a result of the development of an AI system must have avenues to submit complaints, pursue legal action in court, or report issues to a competent authority, and have those harms actually addressed. Many countries are now revisiting, and in some cases updating, existing policies, such as on cybersecurity, cybercrime prevention, data privacy, and intellectual property,
            • 24:30 - 25:00 as well as consumer protection, to factor in how AI technologies are actually changing, and can continue to change, things. For example, in 2022 Air Canada, an airline company in Canada, had a chatbot that gave a passenger the wrong information about a fare discount. When the passenger filed a complaint, the airline said the chatbot
            • 25:00 - 25:30 was a separate legal entity responsible for its own actions. Luckily for the passenger, their country, Canada, had authorities with whom the passenger could file the complaint, and the authorities rejected Air Canada's argument and ordered the airline to pay the passenger a certain amount in damages. Consumer rights groups see this example as a landmark case,
            • 25:30 - 26:00 given that service providers such as airline companies, telecom companies, and others are increasingly relying on AI and chatbots for consumer interaction. As you can see in the image there, we know it's a chatbot, but can we actually sue the chatbot? Whom could we sue? So that is what access to remedy and redress
            • 26:00 - 26:30 means. Okay: safety, accuracy, and reliability. Safety means being protected from danger, risk, or injury. So there need to be technical solutions to ensure that AI systems and tools operate safely and reliably and do not introduce new harms on top of existing harms or
            • 26:30 - 27:00 exacerbate existing risks. Okay, so now we go to the human rights dimension. We have gender equality. Gender equality means equal rights, responsibilities, and opportunities, so it's not just rights, for people of all gender identities. This means having access to the same economic, social, and political opportunities as men and boys, especially
            • 27:00 - 27:30 for women and girls and persons who identify somewhere outside the binary. So there needs to be a check against AI tools that are embedded with gender inequalities and biases. And when I say embedded with gender inequalities and biases, it could mean that the team that developed the AI tool already has biases. So let's take a step back and see why AI
            • 27:30 - 28:00 tools would have biases: because we as humans in the offline world have biases. We need to make sure that AI tools do not reinforce and amplify discrimination against women, girls, and non-binary persons due to several factors, such as a lack of representativity in the data sets. As we know, AI uses data that's available online, and if the data being used for a particular model is
            • 28:00 - 28:30 not representative of the rights, responsibilities, and opportunities of other genders, then we can be assured that the AI tool will also spew out results, outcomes, and decisions that are not representative of the other genders. So again, this relates back to a lack of diversity in the offline world, in the teams that are designing, producing, and making decisions about AI systems. Okay, next is data protection and
            • 28:30 - 29:00 privacy. Data privacy is the right of an individual to have control over who has access to their personal data and for what purpose. Personal data is any information relating to you as an individual; in legal terms, you are the data subject, who can be identified from any data directly or indirectly. Examples of
            • 29:00 - 29:30 personally identifiable data would be your name, your address, your age, your biometrics, etc. In the age of AI, data protection is very important, because data protection means regulating the processing of personal data and information and establishing accountability mechanisms for when your personal data is misused. What is processing? What operations come into play when you
            • 29:30 - 30:00 say data processing? First of all, the operation can be manual, automatic, or electronic. It can be validating, sorting, summarizing, or aggregating data. In the age of AI, data privacy concerns, if you think about it, are probably the same as when the internet first became popular. For those of you out there who remember when we started using
            • 30:00 - 30:30 Friendster, Multiply, and Myspace, we were actually already putting our personal data out there. So it's the same concerns in the age of AI, but the difference is scale. According to a Stanford University white paper, AI systems are so data-hungry and lack transparency to such a degree that we have even less control now over what
            • 30:30 - 31:00 information about us is being collected, what it's being used for, and how we might correct or remove such personal information. So here are questions that are critical to ask in the age of AI. Is our personal information part of an AI model's training data? Are we actually putting data out there about ourselves, with AI models just collecting and then processing it without our
            • 31:00 - 31:30 knowledge, maybe? On data privacy and terms and conditions, if we don't read them, will chatbots analyze and summarize our conversations and use them to profile us, and then use our information for other purposes? Do AI-powered apps have opt-in as the default for data sharing? As far as I know, many apps have opt-in for data sharing as a default; others say it should
            • 31:30 - 32:00 not be like that, or should be opt-out. In many cases, is there a regulation against this practice? Those are very important questions. Next we have bias and unfair discrimination. Bias is prejudice for or against an individual or group. Now, we as humans have a natural tendency to classify people into groups
            • 32:00 - 32:30 according to certain characteristics. In this session alone, I was looking at the list of participants, and based on your offices or your names, I was already making an unspoken but calculated grouping in my head: oh, these are from the government; oh, this name sounds Chinese. So these characteristics are things that people usually use to classify others into
            • 32:30 - 33:00 certain groups; it's a natural tendency. We also classify people based on the different levels of power, status, and income that they have, sometimes unknowingly, and these biases actually figure into the AI models that we develop. How does this happen? It's called algorithmic bias: systematic and repeatable errors in AI-powered systems that create unfair outcomes, which can
            • 33:00 - 33:30 lead to decisions that have a discriminatory impact on certain individuals or groups. Discrimination is treating an individual or group differently from others because of their age, disability, or other characteristics. Discrimination is not necessarily bad, because we can actually use discrimination to promote the interests of a particular group. What is bad, however, is unfair discrimination, which is treating an individual or group
            • 33:30 - 34:00 of people in a way that is unjustifiable and does not promote their interests. Bias and discrimination embedded in AI systems can be used to profile people. This becomes a huge problem when the profiling by the AI system is used as the sole basis for making decisions that can potentially harm individuals. For example, in law enforcement, algorithms may implant
            • 34:00 - 34:30 bias into court environments when judges make decisions about a particular case. In 2016, COMPAS, an AI-powered risk assessment software, gave the following risk assessment scores, so there's a scoring system, to the two individuals that you now see on your screen. As we can see, the white person on the left has been assessed as low risk, while the black
            • 34:30 - 35:00 person on the right was assessed as medium risk; the scores are three and six. Let's now look at their offenses. I am not a lawyer or a court judge, but clearly the man on the left committed graver offenses and should have been considered a higher risk to society. This is just one example of how bias can unfairly discriminate against
            • 35:00 - 35:30 individuals, and therefore must be checked. Okay, next: public participation and awareness. Public participation means engaging ordinary people in the decision-making processes that affect their lives. Public participation is different from just any participation, because there can be stakeholder participation,
            • 35:30 - 36:00 wherein you select the people or individuals that you engage with, but public participation is actually getting inputs from ordinary people, from citizens. Public awareness is the process of informing the general public and increasing levels of consciousness about the potential benefits and risks associated with a particular decision or action. UNESCO recommends embedding a participatory approach in the development and use of AI systems: that
            • 36:00 - 36:30 there should be General awareness programs especially for ordinary citizens and non tech people or those who are not very techsavvy about the potential advantages and disadvantages of AI okay next children's rights children's rights promote the need to provide children with special care and protection because of course they're still dependent on adults for their survival for their protection and development a lack of transparency in
            • 36:30 - 37:00 the design and deployment of AI tools for children could potentially threaten their rights uh such as right to privacy to play to protection from exploitation and abuse and the right to non-discrimination in today's digital environment and as any of you would with children in your homes would know children are really more susceptible to hateful harmful or offensive content and
            • 37:00 - 37:30 even harmful advertisements no even advertisements that you see on YouTube could be harmful children so AI platforms can be used to abuse children especially since many AI platforms if not almost all of them out there that people enjoy create images so as a safety feature some AI tools like Google's Gemini now make it explicit in their own policy guidelines that it should not generate outputs including
            • 37:30 - 38:00 child sexual abuse material or anything that exploits or sexualizes children. That's just one example of a policy guideline that a particular app has adopted, and I hope, as a mother myself, many more applications will do the same. Okay, now this is, to me, the most important and most urgent discussion that we might have today: labor protection. It's the protection of
            • 38:00 - 38:30 employment conditions, working conditions, and labor welfare, as well as occupational safety, health, and environmental conditions for a worker. Everyone has the right to work, to free choice of employment, to just and favorable conditions of work, and to protection against unemployment. This is a right that is recognized globally and has been
            • 38:30 - 39:00 instilled in many national policy frameworks and laws globally. Labor protection must consider all components in the AI life cycle and value chain. What does this mean? From the moment an AI company extracts mineral resources to develop AI components, such as microchips that use metal, to the design phase of AI systems
            • 39:00 - 39:30 and the development and training of algorithmic methods or models, there should be labor protection in each of those components. Especially for a country like the Philippines, it's very urgent and important to discuss how AI technologies have and will continue to disrupt the nature of work and potentially inflict widespread job loss
            • 39:30 - 40:00 due to automation. The World Economic Forum's 2023 Future of Jobs Report predicts that 25% of jobs will be negatively affected by algorithmic displacement over the next five years. A similar study by Goldman Sachs predicted that over 300 million jobs globally will be lost or degraded by the rollout of AI. Now, this is sort of a doomsday
            • 40:00 - 40:30 scenario. Okay, there are many assessments being done globally, and in the Philippines I think this is a very important study to have. Labor protection is actually the topic of some bills and resolutions currently filed in the Philippine Congress. Their main objective is to ensure that humans, that Filipinos, are not displaced unjustly by companies intending to adopt AI
            • 40:30 - 41:00 solutions to replace humans. We will discuss these bills later on. Okay, next is cultural and linguistic diversity. Cultural diversity refers to the range of ideas, customs, and social behaviors shared by different groups of people, and it is usually expressed through language. AI development entails building machines with human-like intelligence and
            • 41:00 - 41:30 capabilities, and language is an expression of intelligence and capability. So when AI development fails to adequately represent the larger diversity of global cultures, you subject AI to cultural biases, which can raise ethical concerns and prejudices, as well as discrimination and harm, especially to cultural minorities.
            • 41:30 - 42:00 Again, in the vastness of the AI universe, indigenous peoples, tribes, or those groups in the cultural minority might not be represented, and this can actually create harm. Going back to labor protection, I just want to point out that there are various thoughts and opinions about whether AI will replace humans, and one interesting
            • 42:00 - 42:30 thought comes from Karim Lakhani, who famously said, "AI won't replace humans, but humans with AI will replace humans without AI." So I can tell you right now that if this prediction or opinion were to come true, you are already ahead of many people, because you're attending this training and equipping yourself with knowledge about
            • 42:30 - 43:00 AI. Okay, now let's go to the capacities dimension: competent competition authorities. A competition authority is the regulator responsible for overseeing the fair functioning of markets in a country. Why is this important? Well, as you know, AI tools and platforms are being developed and operated by big tech companies. In many cases
            • 43:00 - 43:30 these are the big tech companies that, in the stock market, you refer to as the Magnificent 7. Examples would be Google, Microsoft, Meta, Amazon, Nvidia, and, I think, Tesla as well. These big tech companies have a dominant market position in terms of AI, and if AI were to take
            • 43:30 - 44:00 over many aspects, if not almost all aspects, of our lives, then you can predict that these big tech companies can actually dictate many aspects of our lives, not only as individuals but even the direction the country will take when it comes to AI policy. Various concerns have been raised about the effect of AI and dominant AI companies. An
            • 44:00 - 44:30 important concern is to what extent AI databases (AI tools, of course, have databases or data sets) can lead to a dominant position on the market. For example, you ask: does Google or Meta AI have a dominant market position because of the sheer amount of data they already have on us? Amazon would have, you
            • 44:30 - 45:00 know, its e-commerce platform, so it knows you personally: it knows what you buy and it knows your wish list. That is how intimately these big tech companies know us, and that's why competition authorities need to be equipped with skills on how to deal with AI companies. Okay, public sector skills development, and this is particularly important for this audience
            • 45:00 - 45:30 right now, most of you coming from government. Skills development is about acquiring work-related skills and competencies through education and training, like the session we're having now. In AI, skills development refers to the technical knowledge, capabilities, and proficiencies that are required to successfully integrate AI into certain functions. We always hear the words upskilling and
            • 45:30 - 46:00 reskilling; this is actually what it means: to develop your skills or gain new skills that will help you successfully integrate AI into what you do. Again, if we go back to labor protection, it does not necessarily mean that you will be replaced by AI. Well, it depends: if you're doing something repetitive and mechanical, there's probably more likelihood that machines will overtake
            • 46:00 - 46:30 your job, but if you are doing something that adds value to the organization, let's say something that requires critical thinking, human relations, brainstorming, or strategizing, these things can actually be supported and enhanced by the use of AI tools; hence you need skills development. This is related to the concern about AI's impact on labor. Since AI can now be applied across a
            • 46:30 - 47:00 wide range of fields and sectors, AI skills are not just about data science and machine learning; they are not restricted to STEM. There is actually now a growing focus on increasing AI literacy across various sectors, no matter what field you're in. For example, if you are a cook, how can AI help you? If you are into farming or manufacturing, or
            • 47:00 - 47:30 in tourism, these are sectors where some analysts have predicted AI would be of most value or use. But the question is: are we equipping these industries and the people working in those sectors to use AI and effectively uplift their lives by integrating AI into their work? Also, the rapid uptake of AI by governments globally demands that basic
            • 47:30 - 48:00 skills such as coding, computing, programming, and robotics be fully integrated into skills programs in the civil service, to ensure that government personnel understand how to use them and also how to apply them responsibly and ethically. Last among the indicators is international cooperation. It is a collaborative
            • 48:00 - 48:30 relationship between countries to work toward shared objectives in AI. International cooperation refers to joint efforts between countries to align their policies on AI and ensure global adherence to responsible AI, that globally we have inclusive and equitable access, that trust and accountability mechanisms are in place, and that we are able to advance scientific research and technical
            • 48:30 - 49:00 knowledge. The last part I would like to emphasize is the advancement of scientific research and technological knowledge, because for a country like the Philippines, where investment in R&D and funding for scientific research is not at par with other countries, it is important that we cooperate with countries that are far more advanced than us, so we can learn from what they're doing and, as I said, adopt and adapt the
            • 49:00 - 49:30 research and technical knowledge they already have and apply it to us. There's no need to reinvent the wheel, but we have to make sure that the knowledge and applications we learn from other countries are appropriate to our context. Okay, so thank you for bearing with me in the deep dive into the different indicators for responsible AI. Going to the results of the Global Index on Responsible AI, we see here a
            • 49:30 - 50:00 map showing the results based on score ranges. As you can see, Europe, North America, and Australia fared very well in responsible AI and got the highest scores across the regions. Countries like the Netherlands and Germany got the highest scores overall, followed by Ireland, the
            • 50:00 - 50:30 UK, and the US. In Asia, Singapore (that small dot in Asia) and Japan garnered the highest scores. The Philippines got one of the highest scores in Southeast Asia, second only to Singapore. So that's the overall result. Now let's look at the top 10 countries
            • 50:30 - 51:00 that fared best in responsible AI. The countries that showed the highest regard for responsible AI according to the index, through their national policies, their government actions, and initiatives by non-state actors, come from developed countries in North America, Europe, and Australia. The top four are in Europe: the Netherlands, Germany, Ireland, and the United Kingdom. The United States is ranked
            • 51:00 - 51:30 fifth. European countries occupy the sixth to eighth spots: Estonia, Italy, and France. Ninth is Canada, which is in North America, and in the 10th spot is Australia. How about in Asia? In Asia, the countries and jurisdictions that made it to the top 35 spots include Singapore and Japan, ranked 11th and
            • 51:30 - 52:00 12th. India got the 25th spot, the Republic of Korea, or South Korea, is ranked 27th, and the Philippines, together with China, Vietnam, and Taiwan (Chinese Taipei), got the 31st to 34th spots. If you look at the scores, I don't know if they're too small for you to see, but let me point out that Singapore,
            • 52:00 - 52:30 Japan, India, China, and Vietnam fared the best in Asia in terms of government actions, so their key competency is how their governments are responding to AI. Meanwhile, the Philippines did best when it comes to non-state actors; that's the purple shading that you see there. In particular, we have good initiatives from our universities, the academe, and
            • 52:30 - 53:00 civil society or nonprofit organizations, so you can see the contrast. Now, the countries with the lowest scores in the global index are low-income countries in Africa, the Caribbean, and Asia, and we can tackle this issue later on. Okay, so what are the key takeaways from the global index? Number
            • 53:00 - 53:30 one: AI governance does not necessarily translate into responsible AI. Having a framework governing AI does not mean that you're actually doing responsible AI and that you're promoting or protecting human rights. Countries that performed well in the index were able to demonstrate a wide range of governance mechanisms, including sector-specific policies, which I discussed earlier, and
            • 53:30 - 54:00 legislative frameworks to safeguard human rights and to advance responsible AI use. But the key is how they are being enforced: how the government, the state, is actually implementing programs, and how government initiatives are effectively safeguarding human rights. So that's the first takeaway. Second: mechanisms ensuring the protection of human rights in the context of AI are
            • 54:00 - 54:30 still limited. Few countries have mechanisms to protect human rights at risk from AI. Such mechanisms, such as impact assessments to measure the real and potential harm of AI systems, access to redress and remedy whenever harm occurs, and public procurement guidelines, which oftentimes cover the use of AI in the delivery of social and economic rights and citizen services, are
            • 54:30 - 55:00 still very limited in some countries. Okay: international cooperation is an important cornerstone of responsible AI practices. Across all regions, international cooperation actually had the highest score among all the thematic areas, demonstrating the foundations for global solidarity. The majority of countries assessed were able to demonstrate activities around international cooperation, which is
            • 55:00 - 55:30 very significant, especially for developing countries whose institutions, R&D, and practices are not yet that established when it comes to responsible AI. So this needs to be leveraged, especially by developing countries like the Philippines. Okay, next: gender equality unfortunately remains a critical gap. Despite a growing awareness of the importance of gender
            • 55:30 - 56:00 equality in AI, it is quite concerning that most countries have not yet made significant efforts to promote it. It was actually one of the lowest-performing thematic areas of the index: only 24 of the countries assessed had government frameworks addressing the intersection of gender and AI. And I'm very happy to note that one of the first initiatives of the Philippines' DICT was to actually look at ethical AI through the lens of gender equality and protection. But
            • 56:00 - 56:30 apart from that, non-state actors globally, not just in the Philippines, are actually showing greater activity in this field, particularly civil society organizations and academic institutions. Next: key issues of inclusion and equality are not being addressed. Few governments consider this a priority at all. This relates to the rights of marginalized or underserved groups, and this particular
            • 56:30 - 57:00 thematic area actually performed among the lowest. So equality and inclusion, including gender equality, labor protections and the right to work, bias and unfair discrimination, and cultural and linguistic diversity, need to play a more crucial role in our development and deployment of AI. Workers in AI economies are not
            • 57:00 - 57:30 adequately protected. If some of you are not aware, we actually have a lot of workers already using AI; any worker in the platform economy is already being subjected to AI. Few countries are actually ensuring the existence of labor rights to protect laborers and employees as the use of AI increases in the workplace, and efforts to upskill workforces do not correlate; again, efforts to upskill workforces do
            • 57:30 - 58:00 not correlate with sufficient labor protection for workers whose jobs might be at risk of being displaced by AI. Even if companies get AI training or upskilling for their employees, it does not mean that those employees are being protected. It is one step, if not the first step, but making sure that jobs are protected,
            • 58:00 - 58:30 or that humans will not be unjustly replaced or discriminated against in the workplace, is important. Number seven: responsible AI must incorporate cultural and linguistic diversity. There needs to be a check on whether there's an imbalance in current AI models, particularly when it comes to large language models, because if used responsibly, AI can actually help
            • 58:30 - 59:00 promote diversity and protect low-resource languages and cultural heritage. Okay, number eight: there are major gaps in ensuring the safety, security, and reliability of AI systems. Among the 138 countries that were surveyed, only 28% have taken steps to address the safety, accuracy, and reliability of AI systems, and only 25% have government frameworks in place to
            • 59:00 - 59:30 enforce technical safety and security standards, including data privacy and cybersecurity standards, for AI. This finding is deeply concerning, because we need to make sure that there is technical integrity of AI on a global scale. And remember, as consumers of AI, we Filipinos are at risk if the countries that actually develop, design, and deploy
            • 59:30 - 60:00 AI systems are not taking safety, data privacy, and cybersecurity into consideration. Next: universities and civil society play a crucial role in advancing responsible AI. Some of them, especially the universities, are actually taking the lead in promoting responsible AI, especially in data collection, data processing, and in
            • 60:00 - 60:30 promoting inclusivity. And finally, there is still a long way to go to achieve adequate levels of responsible AI worldwide. Despite the fast-evolving changes and advancements in the development of AI systems, the majority of countries around the world are far from adopting responsible AI: 67% of the world's countries in the
            • 60:30 - 61:00 survey scored up to 25 out of the 100 points in the index, and a further 25% scored between 25 and 50. Suffice it to say that most of the countries surveyed scored 50 and below out of 100, and that is a very, I would say, alarming
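To make the reported distribution concrete, here is a hedged sketch of the score bucketing being described; the sample scores are invented for illustration and are not the actual index data.

```python
# Illustrative only: the real index surveyed 138 countries; these
# sample scores are made up to show how the reported buckets work.
sample_scores = [8, 14, 19, 22, 24, 25, 31, 40, 48, 72]

low = sum(1 for s in sample_scores if s <= 25)       # "up to 25 points"
mid = sum(1 for s in sample_scores if 25 < s <= 50)  # "more than 25, up to 50"

print(f"{100 * low // len(sample_scores)}% scored up to 25")
print(f"{100 * mid // len(sample_scores)}% scored between 25 and 50")
```

With buckets like these, the talk's figures (67% at or below 25, a further 25% between 25 and 50) mean roughly nine in ten countries scored in the bottom half of the 100-point scale.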
            • 61:00 - 61:30 finding. What is the impact on us, on people? This means that nearly six billion people across the world are living in countries that do not have adequate measures in place to protect or promote their human rights in the context of AI. Nearly six billion people, and how many million Filipinos are affected by this? Okay, let's put a spotlight on the Philippines. Compared to other Asian
            • 61:30 - 62:00 countries in the top 35 spots in the global index, the Philippines fared very well in terms of non-state actors; in particular, the academe and civil society organizations are the bright spots in the country. Let me give you some examples. The University of the Philippines has issued principles for responsible and trustworthy AI. DLSU has conducted a study on the ethical technology assessment of AI; the results of the study will be
            • 62:00 - 62:30 used to reflect on the question of what level of autonomy or control should be provided to affected individuals concerning the use of AI and IoT without compromising their legitimate purposes. We need more studies like that to be funded and made public. The Ateneo established its GenAI task force, one of whose functions is to examine ways by which the university can train its students to be ethical and
            • 62:30 - 63:00 humane users of AI. Nonprofit and non-government organizations are also a bright spot in the Philippines. One example is Ambit Philippines, which aims to empower nations with shared development goals, and disenfranchised populations, to advocate for their interests in the development and use of AI. The Foundation for Media Alternatives,
            • 63:00 - 63:30 or FMA, has been looking into algorithms of abuse, such as addressing online gender-based violence in the age of AI. According to the FMA, one of the most serious risks of AI is its ability to facilitate new types of online gender-based violence. As AI models enable impersonation, easier ways to hack your accounts, stalking, and cyber
            • 63:30 - 64:00 harassment, it is much, much easier to proliferate online gender-based violence. Okay, now we go to the last two slides: insights for AI policy in the Philippines. Many countries are contemplating moving from having just responsible AI principles and guidelines to actually passing legislation to institutionalize AI
            • 64:00 - 64:30 safeguards. The EU Artificial Intelligence Act is one example of a law that prescribes actually very stringent rules and prohibits certain AI-powered activities based on risks. In the Philippines, there are several bills and resolutions focused on AI, and these initiatives are mainly focused on governance and the creation of a government body that will have an oversight function over AI. Some of them wish to regulate AI,
            • 64:30 - 65:00 several of them intend to promote labor protection, and one promotes education. So, very briefly, let's talk about the governance-related bills. HB 796 is about establishing what's called the Artificial Intelligence Development Authority, or AIDA, which will oversee the development and deployment of AI technologies, ensure compliance with
            • 65:00 - 65:30 AI ethics principles and guidelines, and protect the rights and welfare of individuals who will be affected by AI. Next, we have two bills from Representative Mican. HB 7913 proposes the creation of an Artificial Intelligence Board, or AIB, which, like the AIDA, will oversee the development, application, and use of AI systems, but the AIB will have regulatory and
            • 65:30 - 66:00 supervisory authority: it can conduct investigations, impose penalties for violations, and initiate administrative or criminal actions against offenders. So that's the main difference. HB 7983 proposes to establish a National Center for AI Research, or NCAIR, which will be attached to the Department of Science and Technology and headed by a board composed of key
            • 66:00 - 66:30 members of relevant government departments. There is actually already such a center, launched by the Department of Trade and Industry, but this particular bill proposes that the NCAIR be attached to the DOST. Next among the governance-related bills is HB 10385. This bill, by
            • 66:30 - 67:00 representatives from the Revilla family, actually intends to establish an AI bureau within the DICT, so it's not a separate government entity but a bureau within the DICT. It intends to develop a national AI development and regulation strategy, conduct R&D, formulate governance frameworks, prevent worker displacement, and monitor
            • 67:00 - 67:30 compliance in the protection of rights for those who will be affected by AI. Now let's go to the regulation bills. HB 10567, by Representative LRay Villafuerte, proposes to exact accountability and transparency in the production or distribution of deepfakes in the Philippines, to prevent threats from their exploitation or misuse. So we've been hearing about deepfakes, and this is
            • 67:30 - 68:00 especially relevant today with the upcoming 2025 elections. The DICT secretary, Secretary Ivan Uy, said that the government is aware of these deepfakes and that it is actually working with AI providers such as OpenAI (ChatGPT) and Google to address them. But then we go back to our earlier question about jurisdiction and access to remedy and redress: how can the Philippine government make sure that whenever there are offenders,
            • 68:00 - 68:30 when there are deepfakes that can cause confusion, misinformation, and disinformation among voters, we can make them accountable, and what is the legal mechanism we can use? Okay, on labor protection, so far we have two bills. One is by Representative Atti: HB 9448. This bill prohibits employers from making decisions solely based on
            • 68:30 - 69:00 recommendations or results generated by AI, the operative word being "solely": employers cannot make employment decisions based solely on AI. This is to prevent the replacement and displacement of human workers, as well as loss of job security or reduction of salary and pay; you may keep your job, but your salary and benefits may be reduced because many of the things you
            • 69:00 - 69:30 can do can actually be performed by AI more effectively and faster, so it costs less for the employer. These are very important and relevant issues that the bill purports to examine. However, as a policy analyst, I would caution against just jumping into prohibition or regulation. As Dr. Peter Sy said earlier, we should not rush into making conclusions about
            • 69:30 - 70:00 this. There needs to be a balance, a middle ground, such that we do not stifle innovation and do not prevent the private sector from adopting innovation, especially with the call now for digitalization across the different sectors. Okay, HB 10460, by Representative Suan, allows employers to terminate or lay off employees due to the installation of AI, provided that there are
            • 70:00 - 70:30 certain standards to be met. Finally, we have HB 10751, which seeks to establish a Generative AI in Education Council, which purports to oversee and guide the integration of generative AI into the Philippine education system, which of course would require consultations with various stakeholders,
            • 70:30 - 71:00 especially educators, students, and parents; it's very important to gather inputs in the formulation of this regulation. Okay, finally, we are down to our last slide: what are the three key insights that we can get from the Global Index on Responsible AI? Many efforts to promote responsible AI are already embedded in
            • 71:00 - 71:30 broad government AI strategies, but they lack specific measures related to human rights. This trend highlights the need for comprehensive policies, recommendations, and guidelines based on a human-centered approach. Particular attention to human rights and the indicators, the 19 thematic areas and indicators that we discussed earlier, should be taken into consideration. Now, I do understand that not all of them can be equally applied, but we need to look
            • 71:30 - 72:00 at and assess the gaps: where are we lacking the skills, the institutional capacity, and the resources? From there, we can zoom in and put more resources. Next: the measurement of responsible AI must take into consideration the responsibilities of actors across the entire AI life cycle and ecosystem,
            • 72:00 - 72:30 and this includes government actions beyond establishing a policy framework. Again, Dr. Peter Sy mentioned this: we have so many good laws out there, very well crafted, very well written, but the action that happens after is the most important part. Also, through the implementation of these frameworks we can evaluate whether we are being effective, and based on the
            • 72:30 - 73:00 implementation we can feed back into the policymaking cycle and then enhance, update, and revise these policy frameworks as necessary. So it's really a cycle, but the important thing is that after the policy, we have to implement, so that we can progress. Finally, as international cooperation on responsible AI is an area of shared
            • 73:00 - 73:30 commitment between countries around the world, it is a key lever for strengthening the role of global communities in collaboratively monitoring responsible AI progress in practice. And as I've been emphasizing from the start, the Philippines needs to focus its energy on international cooperation as well. We might not have the skills, the resources, or the maturity of institutions
            • 73:30 - 74:00 right now, but with the help of like-minded democratic countries, we will be able to improve our AI posture and equip ourselves not only with the necessary resources but also with the mindset and awareness of how to promote responsible AI.