Summary
In the evolving telco sector, AI's prominence demands a focus on ethics and governance, impacting telecommunications and many other sectors. This transcript explored the complexities of AI use and misuse, highlighting critical principles for responsible AI development, such as inclusive growth, fairness, robustness, accountability, and transparency. Real-world examples, like Amazon's biased recruiting tool, underpin the discussion on ethical AI use. Challenges such as algorithmic bias and transparency in AI systems pose significant governance hurdles, spotlighting the ongoing need for global and local regulations tailored to emerging technologies.
Highlights
AI ethics in telecommunications is vital for modern life.
AI misuse is a concern across educational and professional spheres.
The Philippines ranks high in AI usage compared to larger nations.
AI must balance benefits with ethical governance principles.
Real-world AI failures underscore the need for responsible design.
Key Takeaways
AI's role in telecommunications is rapidly growing, making its ethical use more crucial than ever.
Responsible AI development involves principles like inclusive growth, fairness, and transparency.
Real-world cases like Amazon's biased AI tool show the importance of fair AI systems.
Transparent AI systems foster trust but present challenges due to their complexity.
Countries are urged to develop and regulate AI with a focus on equity and accountability.
Overview
In the contemporary telco landscape, AI's integration has sparked essential discussions around ethical use and governance. As AI becomes ingrained in telecommunications, industries must navigate the delicate balance between technological advancement and ethical standards. This includes grappling with fair use, preventing misuse, and understanding the broader socio-economic impacts of AI, particularly in smaller nations like the Philippines, which notably ranks high in AI usage.
The transcript dives into significant ethical principles that should guide AI development, such as inclusive growth which ensures AI's benefits are equitably shared across societies, and fairness which aims to eliminate algorithmic biases that lead to discrimination. The concept of human involvementโor 'humans in the loop'โis emphasized to mitigate the potential risks that fully autonomous AI systems could pose.
Real-world case studies highlight the consequences of neglecting these principles. Amazon's AI recruiting tool, which showed gender bias, is a prime example. Furthermore, challenges in transparency and accountability are brought to the forefront, illustrating how explainability in AI decisions remains elusive but essential for building trust and ensuring robust, ethical AI governance worldwide.
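The gender skew described in the Amazon case can be caught with a simple screening audit. Below is a minimal, hypothetical Python sketch of the "four-fifths" disparate-impact check commonly used in hiring audits; the data, function names, and the 0.8 threshold are illustrative assumptions, not Amazon's actual process.

```python
# Minimal sketch of a fairness audit on screening outcomes.
# The candidate data and the 0.8 "four-fifths" threshold are
# illustrative assumptions, not any company's real process.

def selection_rate(outcomes):
    """Fraction of candidates marked as selected (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below ~0.8 are a common red flag for adverse impact."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical screening results: True = advanced to interview.
men   = [True, True, True, False, True, True, False, True]
women = [True, False, False, True, False, False, False, False]

ratio = disparate_impact(men, women)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact - review the model and training data.")
```

A check like this only surfaces a symptom; as the transcript notes, the root cause in Amazon's case was historical training data, which an audit of outcomes alone cannot fix.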
AI Ethics & Governance Transcription
00:00 - 00:30 Well, in the telco context, since we're shifting towards more software-based telecommunications, AI will be even more prominent. So we'll discuss a fundamental concern in
00:30 - 01:00 telecommunications and in our modern life, and that is AI. When we talk about technology, inevitably we'll talk about its proper use, about what it's really about: how we see ourselves, the lives we want to lead. That falls within the purview of AI ethics and governance. As you can see, even for
01:00 - 01:30 those who are school-based, we're pretty much inundated by good news and bad news about AI, good use and misuse. Even scientists are not spared; I see that my colleagues, for instance, are already using ChatGPT to create quizzes. If you're still a student, please raise your hand, so you have a sense that it's not as if only students would
01:30 - 02:00 use AI to get past exams or to fulfill requirements; teachers do it too. For the Philippines as a country, we are a small country relative to India or the United States, at about 110 to 111 million people. As you can
02:00 - 02:30 see, we're pretty heavy users of AI. Now, what do you think: is that good or bad? As you can see, Japan shows low usage, and even Indonesia, which is bigger than us at around 280 million, we outrank in terms of traffic on ChatGPT. But if you probe deeper, the problem here is that we're using AI, especially ChatGPT, to search for information, which may not
02:30 - 03:00 be the proper use of AI. All right, so what is AI? It is a field dedicated to developing systems capable of performing tasks and solving problems associated with human intelligence. Most, if not all, systems that make decisions normally requiring human expertise fall within the purview of AI. There's also a conflation of terms: data science looks like
03:00 - 03:30 it's becoming a less fashionable discipline because of this conflation, but there's an overlap; you actually need a robust scientific understanding of data to be able to do AI. Then you have machine learning at a deeper level, and deep learning, especially for the models we're dealing with now, which are really artificial neural nets, the
03:30 - 04:00 popular ones anyway, like ChatGPT. But it's not just one area. There's more: natural language processing, knowledge representation, machine learning, computer vision, speech recognition, robotics. And the challenge really is to combine all these into a single, contiguous AI service. So how
04:00 - 04:30 do we deal with that? We will be talking about the principles that govern AI. Today we embark on a journey through the values informing the future of AI. Before we begin, let's reflect on a real-life story that highlights the importance of ethical principles and considerations in AI. In 2018, it was reported that Amazon had developed an AI-powered recruiting tool to assist with hiring. The tool was designed to scan resumes and identify the most qualified candidates. However, it was later discovered that the
04:30 - 05:00 tool was biased against female candidates. The reason: it was trained on resumes submitted to Amazon over the previous ten years, which were predominantly from male applicants. As a result, the system learned to favor male candidates and to downrank resumes containing words commonly used by women. With that in mind, let's take a look at the principles that hopefully can help us be fair and develop AI to serve our best goals and aspirations
05:00 - 05:30 as a people. A report on an AI development framework, available at ai.org theframe, offers a set of value-based guidelines covering inclusive growth, human-centered values, transparency, robustness, and accountability. These principles are the foundation of responsible AI development. I strongly suggest that you check out this live online document for a detailed discussion
05:30 - 06:00 of today's topic. Principle one: inclusive growth, sustainable development, and well-being. Artificial intelligence plays a crucial role in sustainable development, intertwined with our national goals for inclusive growth and well-being. As countries embrace AI, it is essential to consider both its advantages and risks. Mitigating potential negative effects is vital, ensuring AI benefits are shared equitably across society. Principle two: human-centered values and fairness. Fairness is a
06:00 - 06:30 cornerstone of AI. Bias in AI systems can lead to discriminatory outcomes affecting various sectors of society. Defining and evaluating fairness in AI is a challenge, but we must ensure AI respects human rights and data privacy rights. Instead of relying solely on AI, robots, or automation, it's essential to involve humans directly, especially for high-risk systems. While AI can offer innovative solutions, human participation remains crucial to ensure these systems enhance human capabilities rather than
06:30 - 07:00 causing harm. AI's potential for innovation is limitless, but it also opens doors to potential misuse. Ensuring fairness in the development of AI is challenging, but our stakeholders argue that end users should have transparency into AI's decision-making process and the ability to influence results. In some cases, human involvement is necessary to avoid purely algorithmic decision-making, ensuring clear human accountability and system auditability. However, it's
07:00 - 07:30 essential to recognize that autonomous systems may not always be under human control. Therefore, we must qualify human involvement in AI systems, particularly in high-risk applications. In such cases, having humans in the loop (HITL) is crucial for high-risk AI systems; the EU AI Act mandates human oversight to ensure safe and responsible use. Human involvement in AI goes beyond HITL: a
07:30 - 08:00 successful approach involves leveraging both human and machine competences in a virtuous cycle to produce valuable and positive outcomes. At its core, AI must prioritize the protection of human rights. Principle three: robustness, security, and safety. Building trust in AI requires us to prioritize robust, secure, and safe systems. Whether it's self-driving cars or medical applications, reliability is of utmost importance. To ensure safety standards and protect human rights, adequate
08:00 - 08:30 regulations and oversight play a vital role. While it's essential to acknowledge that the majority of AI systems deployed so far are largely safe, it's understandable that people might get fixated on the more dramatic incidents. For instance, earlier this year there was a tragic incident involving a Belgian man who reportedly engaged in a six-week-long conversation with an AI chatbot called Eliza about the ecological future of the planet. The chatbot supported his
08:30 - 09:00 eco-anxiety and tragically encouraged him to take his own life to save the planet. Instances like this remind us of the responsibility we hold as AI developers to prioritize safety and well-being. Recently, the launch of OpenAI's ChatGPT language model stirred mixed reactions. This model showcased its ability to mimic human conversations and generate unique text based on user prompts. However, this has also raised
09:00 - 09:30 concerns about potential misuse or unintended consequences. Moving forward, it is crucial for AI developers to strive for continuous improvement in making their products and services safer to use. By emphasizing robustness, security, and safety, we can foster public trust and ensure that AI technology is a force for good in our lives. Principle four: accountability. AI actors must be accountable for their actions and decisions. Responsible AI involves
09:30 - 10:00 transparency and the ability to explain the reasoning behind AI system choices. Auditability helps ensure compliance with regulations and mitigates potential risks associated with AI. The risk of disinformation has gained prominence recently with the advent of ChatGPT and generative AI. Consider the case of Brian Hood, an Australian mayor. Hood was a whistleblower praised for showing tremendous courage by exposing a worldwide
10:00 - 10:30 bribery scandal linked to Australia's national reserve bank. However, his constituents told him that ChatGPT had named him as a guilty party who was jailed for that very bribery scandal in the early 2000s. Should OpenAI, the company behind ChatGPT, be held responsible for such apparent disinformation and reputational harm, even if it could not possibly know in advance what its generative AI would say? The question of whether OpenAI
10:30 - 11:00 should be responsible for this is a complex one. On the one hand, OpenAI could argue that it is not responsible for the content its AI system generates; on the other hand, OpenAI could also be seen as having a responsibility to ensure that its AI system is not used to spread disinformation. Principle five: transparency, explainability, and traceability. Transparency in AI policies and decisions is vital for a democratic society.
11:00 - 11:30 Understanding AI systems, even for non-technical stakeholders, fosters trust and informed decision-making. Explainability allows us to identify potential biases and ensure fair AI outcomes. In Singapore, it's required that AI decisions and associated data can be explained in non-technical terms to end users and other stakeholders. This openness promotes informed public debate and democratic legitimacy for AI. However, the concern of AI systems being perceived as black boxes, lacking transparency and explainability, has been raised during our stakeholder
11:30 - 12:00 consultations. AI systems navigate through billions, even trillions, of variables that influence outcomes in complex ways, making them challenging to comprehend even with full human attention. Large language models like ChatGPT, with trillions of parameters, have made explainability elusive even to their own developers. Nonlinear models further complicate understanding the connection between inputs and outputs. Despite these challenges, developers are working on solutions. More interpretable models
12:00 - 12:30 like decision trees and rule-based systems are being explored. Techniques such as human-readable rule extraction, sensitivity analysis, and localized explanations are also enhancing explainability. Additionally, detailed documentation of model architecture, training data, and evaluation metrics can provide valuable insights into AI system behavior. Regarding transparency, some stakeholders propose focusing on policies and processes rather than revealing AI algorithms entirely. This approach acknowledges potential risks, as excessive transparency might hinder
12:30 - 13:00 innovation by diverting resources from improving safety and performance. As the European Union moves towards adopting the AI Act, there's another important principle linked to transparency called traceability. Traceability is distinct from explainability but equally significant: while explainability focuses on understanding how an AI system works, traceability involves actively tracking its use to identify potential issues. This empowers AI system operators to spot and address risks like data bias and
13:00 - 13:30 coding errors. Achieving traceability means keeping records of the data used, the decisions made, and the reasons behind them. Explainability, on the other hand, plays a critical role in building user trust and aiding informed decision-making: it provides a human-readable explanation of how an AI system makes decisions. Both traceability and explainability contribute to the broader principle of transparency. However, it's important to recognize that transparency alone may not automatically build public trust. Professor Onora O'Neill highlighted this concern in her BBC Reith Lectures
13:30 - 14:00 two decades ago, noting that while transparency and openness have advanced, they have not done much to build public trust; in fact, trust may have even diminished as transparency increased. This insight remains relevant in today's discussions about AI and its regulation.
15:00 - 15:30 Principle six: trust. Trust is a crucial element in AI adoption. AI systems must prove themselves to be reliable and safe, especially in applications impacting lives and livelihoods. Earning trust requires
15:30 - 16:00 adherence to high standards and inclusive AI governance. We now know that transparency does not automatically translate to trust. We need trust to provide space for our Filipino AI developers to pursue innovations that benefit society; in turn, they have to act responsibly and be trustworthy. AI research is a public good that needs to be supported by all stakeholders. This is where my presentation ends, even as we all continue our journey through AI
16:00 - 16:30 principles. For more details, check out our report on an AI governance framework for the Philippines, available at ai.org theframe. The values and principles we discussed today are the compass guiding AI's future. Let's continue to develop AI responsibly, ensuring it benefits everyone while respecting human rights and promoting a fair and equitable society.
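The humans-in-the-loop (HITL) requirement discussed in the talk can be sketched as a simple routing gate: decisions in designated high-risk domains, or made with low model confidence, go to a human reviewer instead of being automated. The domain list and threshold below are illustrative assumptions, not the EU AI Act's actual categories.

```python
# Sketch of a human-in-the-loop (HITL) gate for AI decisions.
# The risk categories and confidence threshold are illustrative
# assumptions, not the EU AI Act's actual classification.

HIGH_RISK = {"credit", "hiring", "medical"}  # domains needing human oversight

def route_decision(domain, confidence, threshold=0.9):
    """Return 'auto' only for low-risk, high-confidence decisions;
    everything else is routed to a human reviewer."""
    if domain in HIGH_RISK or confidence < threshold:
        return "human_review"
    return "auto"

print(route_decision("spam_filter", 0.97))  # low-risk, confident
print(route_decision("hiring", 0.99))       # high-risk domain
print(route_decision("spam_filter", 0.55))  # low confidence
```

Note the asymmetry the talk emphasizes: high-risk domains are never fully automated here, no matter how confident the model is.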
16:30 - 17:00 All right, so let me just run through some of the points made there. Inclusive growth, sustainable development, and well-being: this is something that is embedded in our Philippine Innovation Act; burdens and benefits have to be shared equitably. We also see how AI could potentially bring in trillions
17:00 - 17:30 of dollars' worth of economic activity. We're also seeing that 70% of companies will have adopted at least one type of AI technology. For the BPO industry, for instance, 60% in the last survey are already using AI, so if you're headed to BPO, most likely you'll be using AI, and the same goes for some other companies in the Philippines. There is increased
17:30 - 18:00 productivity; that's why in my workplace it's the default that my staff use AI, so the onus is on them to justify it if they don't. However, it's not something that is straightforward; it's easier said than done. AI, as a matter of fact, will potentially also bring in an added
18:00 - 18:30 dimension of inequity, as opposed to simply providing access to, say, the internet. If you're in Tawi-Tawi, you would probably experience the internet via Starlink, and that's fine and dandy. However, there's another dimension: if you are going to be using AI, there are additional skills expected and required of you: algorithmic
18:30 - 19:00 skills, the ability to access fair databases, and the right to be treated fairly in those databases. And that's quite a leap; it's no longer just access to AI. As you can see in the previous slides, I had a slide on the Philippines being on top of the countries using ChatGPT. The problem with our use, according to
19:00 - 19:30 the data, is that we use ChatGPT to look for facts, to look for certain information, and those bits and pieces of information could have been hallucinations. In other words, our usage of AI so far is shallow, and that's a problem when you have to think in terms of inclusive growth, because even as we have access to AI, it's not just access we're talking about;
19:30 - 20:00 it's about being able to access it properly, and that requires more than just access. You also see that right now AI is getting to be stale in some areas; people are not seeing beyond the hype. It appears that we have already passed a peak of inflated expectations, so it's a letdown for others, for instance if they were expecting AI to
20:00 - 20:30 do more. So we could be seeing disillusionment already, and some are enlightened, hopefully, so that when we truly understand AI we experience a plateau of productivity. This is really where it matters most: we see beyond the hype and go straight to productivity in our workplaces. I see this happening in mine; I'm not so sure about other areas of the country.
20:30 - 21:00 As has been emphasized earlier as well: human-centered values, treating people fairly, avoiding algorithmic decisions and their discriminatory consequences. If you look at algorithms, they have the tendency to perpetuate, if not amplify, existing social, economic, and cultural inequalities, so the idea really is to have fairness and to be respectful of
21:00 - 21:30 human rights and data privacy. Practically all disciplines, all professions, are already affected. You might think that if you're a hairdresser or a makeup artist you would not be affected by AI, but as you can see in this headline, a makeup artist lost her job after AI assessed her body language. So
21:30 - 22:00 it looks like there's no job anymore that is safe from AI, at least directly or indirectly. You also see some countries being defensive about AI, but that has already been reversed in Italy; they now have access to ChatGPT again. There are also certain areas of concern, especially when OpenAI introduces a new version of Chat-
22:00 - 22:30 GPT: you have relatively increased risk as well, and some companies are worried about intellectual property and trade secrets being exposed to AI, and thereby to the rest of the world. We have discussed this well enough in the video, but just to point out that this is an ongoing concern: every time you have a new
22:30 - 23:00 model of AI, there are increased security and safety concerns, even as you learn from previous models, because the more you push the boundaries of AI, the more you actually expose yourselves to risk. Accountability is a moving target as well: as AI progresses, and as new domains of application are considered, new areas of expertise are being generated in AI, and
23:00 - 23:30 that is a continuing problem. As discussed earlier, transparency is something that is almost intractable to some regulators, for the simple reason that systems tend to be black boxes, and by transparency we mean the operations of AI as well, which may tend to be unexplainable. For
23:30 - 24:00 neural nets, for instance, there is no straightforward explanation of why an input produces a certain output. And the interaction with humans, especially in learning contexts, in relation to, for instance, ChatGPT and other large language models: the more you put in human elements, the more mysterious the outcomes become.
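One of the explainability techniques named earlier in the talk, sensitivity analysis, offers a partial workaround for this black-box problem: nudge one input at a time and watch how the output moves. A minimal sketch follows, using a toy function as a stand-in for the opaque model; with real neural nets, such probes only explain behavior near one specific input, which is exactly why explainability remains elusive.

```python
# Sketch of perturbation-based sensitivity analysis: probe a
# black-box model by bumping one input at a time and measuring
# how the output changes. The "model" here is a toy stand-in.

def black_box(features):
    # Toy scoring function standing in for an opaque model.
    x, y, z = features
    return 3.0 * x - 0.5 * y + 0.0 * z

def sensitivities(model, features, eps=1e-4):
    """Approximate each input's local influence on the output."""
    base = model(features)
    grads = []
    for i in range(len(features)):
        bumped = list(features)
        bumped[i] += eps
        grads.append((model(bumped) - base) / eps)
    return grads

# Reveals that the first input dominates and the third is ignored.
print(sensitivities(black_box, [1.0, 2.0, 3.0]))
```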
24:00 - 24:30 So that is a problem. And then, when it comes to producing content: you may have seen, more recently in the Philippines, your Messenger having an AI tab already, an AI button where you can interact with an AI agent. You see this generating images that could potentially be misused. I was
24:30 - 25:00 checking out, for instance, certain images of Jose Rizal, combining him with certain scenarios, and I could see potential for misuse there as well. So let me just breeze through these points, because we're running out of time; I'll be sharing the slides with you anyway. Just to point out that when you talk about AI governance, there are
25:00 - 25:30 many elements there as well. Leadership is one: if you are in the context of a company or a school, your bosses, the board of regents or trustees, need to be really engaged for AI ethics to be front and center. Looking at the core technical elements of AI, this is not something that should just be left to the technical people; you see how this evolves in an organization. More importantly, you have
25:30 - 26:00 to consider the people in your organization and the culture that is dominant there. You have to look at risk when deciding go or no-go for certain AI operations, and at operational structures, processes, and mechanisms as well, especially how AI performs in your organizational context. I will just
26:00 - 26:30 skip these elements and leave you the link later on; this has been alluded to earlier in the discussion. One of the last points I have to discuss with you is the human-involvement consideration in AI, because if you come to think about it, AI is really about autonomy. This is
26:30 - 27:00 an area that distinguishes it from, say, simple data science. AI is always about developing systems that are aimed at becoming autonomous, and if you consider the notion of autonomy, by definition it is out of human control, or out of human reach. Even if you say, "oh, I want to just insert myself there and
27:00 - 27:30 take over," over time you increasingly lose control, because your aim is to develop autonomy, in a way, in machines. So there are potentially conflicting tendencies between human control and autonomy; you have to qualify what you really mean by AI autonomy, because as technology progresses, there is greater autonomy and therefore
27:30 - 28:00 less human control. So the idea is, especially for high-risk applications, you would need humans in the loop, and that is a concept that is actually hard to operationalize, because you have a long, continuous process, some of it pretty boring, and humans are terrible at dealing with boredom. As a matter of fact, we try everything to escape boredom, possibly including escaping
28:00 - 28:30 boring lectures. So, prohibited uses of AI: when we talk about human involvement, we don't want AI to be applied in weapon systems. New Zealand is leading the way in advancing the view that we shouldn't be using AI killer robots. We shouldn't be looking at manipulation and exploitation
28:30 - 29:00 with the use of AI; unfortunately, in some countries this is more the norm than the exception. Indiscriminate surveillance: there are societies that are basically dominated by surveillance technologies, surveillance cameras and so on. But even if we might think that we are free, there is actually surveillance going on. There is a book on surveillance capitalism, which is essentially about monetizing our
29:00 - 29:30 activities online. So if you use Facebook or other social media, you're pretty much being monitored. Even as you surf the internet, even as you browse sites, you are still being surveilled: cookies will be gathered, certain patterns will be determined, and your devices are probably even listening to you.
29:30 - 30:00 Sometimes, when you have a conversation with your friends about a certain dress or certain products, you'll be surprised that when you open your browser you see an advertisement for a similar product to the one you were interested in. Social scoring is another area that is supposed to be prohibited, but it is happening in at least one country: if you are misbehaving online, you
30:00 - 30:30 will not get your passport and you cannot travel, because you have a very low social score. So these considerations will have to be put front and center when we talk about AI governance. Now, there are risk profiles in different areas of society: the criminal justice system, financial services, health and social care, social and digital media, energy and utilities; there is an
30:30 - 31:00 accounting of the risks involved here, although I think there are variations when we apply this to the Philippines. For instance, we have a higher risk of social media manipulation during elections; in the US, there are controversies around the use of certain images and of synthetic data, and you can pretty much see how these stack up against
31:00 - 31:30 other risks of AI. Knowing these risks would be a prerequisite to being able to deal with them. More bad news, so to speak, though we already alluded to this earlier: in Southeast Asia, and it's changing now, this was last year, there are recent initiatives to come up with AI regulation,
31:30 - 32:00 but it's not happening anytime soon. My colleagues are participating, I think right now, in Laos, where this is being discussed, but I don't see the regulation of AI in Southeast Asia happening even in two years, because while there is clamor, it's a long shot to get this into some kind of regulatory framework applicable to all of Southeast Asia. So
32:00 - 32:30 right now we're still pretty much a wild, wild west: the Philippines' POGO situation, and Thailand and Cambodia, where Filipinos are human-trafficked to serve in the underbellies of AI. We see that happening, so that is still a problem. Now, a top-down approach may be a problem. We see regulators who are so gung-ho
32:30 - 33:00 about regulating AI, but my discomfort really is that they may have been misinformed. There is one lawmaker saying that AI research needs to be regulated, that you need to register your research in AI; I don't think that is a good idea, so we're trying to reach out to that regulator to at least provide him with
33:00 - 33:30 proper expertise when it comes to AI. There are many unintended, unanticipated consequences, especially for us in the Philippines; we are very good at crafting laws without thinking about their unintended, unanticipated consequences, so we shoot ourselves in the foot when we do regulation, for the reason that we lack understanding of this technology. We have to look at various
33:30 - 34:00 technologies to be able to see, in comparative terms, how this may pan out and how they are properly regulated. When people think about regulation, they immediately think of laws and Congress; that may not be good practice. You have to look at a range of interventions when you deal with AI: governance, regulation, and legislation are
34:00 - 34:30 not the same. There are discriminatory biases: even now, if you look at the pronouns used by ChatGPT, there are stereotypes being perpetuated. A driver, for instance, or a scientist is almost always a "he," but the reality is that there are already more women scientists in some areas, drivers are no longer just men, and so on
34:30 - 35:00 and so forth. So biases are being amplified by AI, and we have to take a look at our training data, which is a potential source of bias. Algorithmic bias is another possibility: the way we parse data may already be biased. There's also the general question of data patrimony, so to speak: do we allow our national data to be fed to the large language models of
35:00 - 35:30 OpenAI, Microsoft, or Amazon? Because if that is all that is going on, then we're pretty much at the raw end of what we call data colonialism. A large contrast here is the effort of France, for instance: they are trying to come up with their own national large language model based on Llama 3, an ongoing project of the government of France, precisely to combat what we call data colonialism, where
35:30 - 36:00 French data or Filipino data would just be training data for large language models owned by big tech, with no conscious effort to uplift the interests of the country. Very quickly, you see that this
36:00 - 36:30 technology is really progressing by leaps and bounds, although I'm not going to say that there's going to be general intelligence, superhuman intelligence. But we already see greater progress in this area; we're now approaching Llama 3; I think the one you're using in your Facebook is already Llama 3-point-something. As you can see, progress is measured in weeks. It's been estimated that the compute
36:30 - 37:00 requirement for this, because there's an energy requirement for compute, has a doubling time of 100 days. So if you're using 100 watts now, then in 100 days, just to power your AI, you will need 200 watts for that baseline. There are benefits to this, there are upsides, there are limitations; we just have to take a look at them. But we, as Filipino researchers, have to be
37:00 - 37:30 trying this out; we have to apply this in our own context. That's why I'm inviting you to the October 24-25 conference, which I put in the chat, so that we can talk about application areas in the Philippines: agriculture, health, and so on. So how do we deal with AI? We have to deal with AI responsibly, looking at legal and regulatory frameworks focused on privacy, fairness, and equity. We
37:30 - 38:00 have to build local capacity in AI, and we have to look at multi-stakeholder, whole-of-society approaches. We look to Finland, for instance, where they have a conscious effort to educate their citizens: at least 10% of Finnish citizens have undergone training in AI, at least a familiarization with the technology, and Finland is now presenting itself as the educator of all of
38:00 - 38:30 Europe. We advocate for greater representation in global AI governance. We understand that we don't have the compute; right now I'm looking for 450 million pesos so we can build an eight-node compute cluster for AI. That is really quite modest, but that's what it amounts to. So if you have 450 million pesos, that can help AI
38:30 - 39:00 research, at least in my university. Finally, investment in AI-enabled social research to prioritize well-being and equity: this is not something that should just be an afterthought; right from the get-go we have to design our systems to produce well-being and equity.
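The doubling-time arithmetic cited near the end of the talk (compute and power demand doubling roughly every 100 days) compounds quickly, and is worth seeing in numbers. A quick sketch; the 100 W baseline and 100-day doubling period are the speaker's figures, taken as given:

```python
# Sketch of the doubling-time arithmetic from the talk:
# if power demand doubles every 100 days, a 100 W baseline
# grows as baseline * 2 ** (days / 100).

def power_needed(baseline_watts, days, doubling_days=100):
    """Power required after `days`, given exponential doubling."""
    return baseline_watts * 2 ** (days / doubling_days)

for d in (0, 100, 200, 365):
    print(f"day {d:3d}: {power_needed(100, d):7.1f} W")
```

After one year (3.65 doublings), the same workload needs over twelve times the baseline power, which is the point behind the talk's concern about energy requirements.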