AI – What To Expect Next Year

What were the biggest AI topics and trends of 2020, and what are the latest AI predictions for 2021? Stick around for the first AI News year in review to learn more. Hey everyone, I'm Alex Castrounis and this is AI News, the number one show for the latest topics and trends in artificial intelligence.

Subscribe to this channel to get the latest episodes of AI News, and let's dive in. For this year in review, we're going to recap a lot of the developments and trends of 2020 in artificial intelligence, as well as talk about what to expect in AI in 2021.

Be sure to let us know in the comments about anything from 2020 that we missed, or anything we should expect from 2021 that we didn't talk about today. To kick things off, let's talk about the elephant in the room, which is obviously that 2020 was the year of the Covid pandemic, which had a major impact globally on people's lives and everything else.

Specifically, when we're thinking about artificial intelligence, the pandemic had an impact there too. Because of the way it reorganized how people work, with remote work and the ways companies have had to adapt, it may have accelerated AI adoption to some extent.

It also might have impacted company data sets significantly. If you think about it, companies often use their data to analyze what happened in different quarters or throughout the year in general, and try to make data-driven decisions based on that, or to see how their customers are buying and spot any changes and trends, whatever the case may be.

Obviously, the pandemic has had a major impact on everything across the board, from people's purchasing, where they purchase, how much they purchase, and whether they purchase at all, to so many other things. So in some ways the data that was generated and collected throughout 2020 is very skewed.

Think of it like how seasonality and certain holidays might affect data; this is a similar thing but on a much bigger scale. So time will tell how Covid really impacted the field of artificial intelligence, but it certainly has to some extent.

Starting off with advancements and trends in AI specifically, we're going to go through 2020 and then move on to 2021 and talk about what to expect for the future. In terms of advancements and trends, one of the biggest advancements in artificial intelligence in 2020 was in the area of natural language techniques like natural language processing, or NLP, and natural language understanding, or NLU.

One of the big developments there was that so-called transformer models really took center stage and propelled natural language capabilities further than ever before. One of the most notable and recognizable examples of that is the GPT-3 model that OpenAI created and released to the world, which wowed everybody.

It can also be considered kind of scary by many as well, and again, let us know in the comments what you think about some of these capabilities we're seeing nowadays with models like GPT-3.
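
As a quick, hedged illustration of what working with these transformer models looks like in code, here's a minimal sketch using the open-source Hugging Face transformers library. GPT-3 itself is only available through OpenAI's API, so this assumes the smaller, openly available GPT-2 as a stand-in.

```python
# Minimal text-generation sketch with a pretrained transformer.
# Assumes the Hugging Face `transformers` package is installed;
# GPT-2 stands in here for much larger models like GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence in 2021 will", max_length=40)
print(result[0]["generated_text"])
```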

But we've also seen new benchmarks in natural language processing and understanding. Previously, benchmarks like GLUE and SuperGLUE have been the industry standard, and now, as these models get extremely good at those benchmarks, approaching human-level performance, people are trying to create new benchmarks that push this area of AI even further.

Another area in natural language that's really come a long way is voice recognition and assistants, so voice recognition specifically in speech-to-text applications and so on.

Clearly we've seen a lot of advancements there; voice recognition technologies are getting better all the time, and so are the assistants, which are better able to answer questions, look things up, and so on.

Another area, related to GPT-3 and some of these transformer models, is the use of techniques like transfer learning and fine-tuning. Previously these techniques were pretty widely used in computer vision.

For those of you familiar with ImageNet, the massive image database, a lot of people in the computer vision area had been pre-training models on that massive set of images and then doing what's called fine-tuning to apply that pre-training, or learning, to their own specific domain or problems. We're now seeing that used more and more in natural language processing as well, including with things like those transformer and GPT-3 models.
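
To make the transfer learning idea concrete, here's a minimal sketch in PyTorch/torchvision: load an ImageNet-pretrained network, freeze its weights, and swap in a new output layer for a hypothetical two-class problem of your own. The dataset and training loop are assumptions and omitted.

```python
# Transfer learning sketch: reuse ImageNet pre-training, fine-tune a new head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)        # weights learned on ImageNet
for param in model.parameters():
    param.requires_grad = False                 # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)   # new head for a 2-class domain task
# Train only model.fc on your own labeled images; that's the fine-tuning step.
```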

Another thing that was really big in 2020 was this idea of augmenting human intelligence rather than full-scale job replacement or job automation. Augmenting human intelligence is all about using AI to automate some of the repetitive, tedious, rote, menial tasks that people do day to day at their jobs.

Think of it like administrative work and that sort of thing. By automating some of that with AI, workers can focus more on value-add work, be more productive, be more creative, and really do what they do best. In a lot of ways it's a win-win: it lets people do all those things and hopefully enjoy their jobs more because they're not doing all these menial, repetitive, tedious tasks, and because it results in more employee happiness and enjoyment, it also helps companies with employee retention, happier employees overall, and so on.

Now that being said, augmenting human intelligence, or job automation in general, is complicated; there's a lot of nuance to it, and the question of whether automation is good, bad, or otherwise is not simple. These things can also have potential socioeconomic impacts, including things like inequality, so a lot of the focus needs to shift to reskilling, upskilling, training, and so on, so that people are able to work in this increasingly data-powered, advanced-technology world we live in, where AI is one of those technologies.

We heard more and more about that in 2020, and I expect we'll hear a lot more about it in 2021, so it's definitely something worth keeping an eye on. Let us know in the comments your thoughts on any of those things and whether you've seen any of that in the real world yourself.

Another area that gained a lot of steam in 2020 was generative AI. There are some silly, novel applications of that you may have seen, like what they call style transfer, where you might take a picture of someone and transfer it to look like a Van Gogh or Monet painting, so transferring one style onto something like a picture.

But the bigger news headline in generative AI in 2020 has really been around what are called deepfakes, specifically audio and video deepfakes: AI-generated audio, images, or video where nothing you hear or see is actually real, it's all generated by AI. Obviously that could have huge negative impacts on many things in society, so the advancement of that area has definitely raised concern among many, and we're starting to see increased regulation and attention being paid to it. We saw more of that in 2020, and we'll see what happens in 2021 and beyond. Of course, let us know your thoughts on things like deepfakes, which have clearly gotten a lot of people talking recently.

Another area that was big in 2020 was reinforcement learning. It's not a new area of AI, but the applications are going beyond just playing video games. Reinforcement learning gained a lot of steam and notoriety by winning games like chess and Go, but now we're seeing applications in drug discovery, in automatically finding the optimal hyperparameters and neural network architectures, in control systems, and so on. So reinforcement learning really gained a lot of steam in 2020 and will be an interesting area to keep an eye on. So has automation. We talked about automation before, but there was also more automation in the area of what they call RPA, or robotic process automation. Think of companies that have lots of different operational processes or routines they go through all the time; the idea is to automate those processes, one at a time, using techniques like AI. So robotic process automation has been getting a lot bigger as well.

And then finally, hardware advancements. Hardware, whether it's computer chips that are able to do the training of AI models or the actual inference, meaning the chips that hold the models and, when you pass data to them, process that data and produce an output or a prediction, as well as computing hardware more broadly.

Now that everything is heavily in the cloud, that means different kinds of cloud-based computing hardware, or even what they call edge-based hardware, edge being things like mobile devices; more and more mobile devices like smartphones are running AI models right on the device.

So you're starting to see edge AI as well as embedded AI and things like that, and some of the hardware has really come a long way in terms of being small and cost-effective enough for these edge and embedded AI cases, as well as taking cloud computing to the next level by making it a bit more accessible, a bit less costly, and a bit faster to train a bunch of different AI models.

Moving on to AI applications and use cases, we've seen a lot more in 2020 really across the board, meaning in different industries and different business functions. We're really moving away from AI being a largely academic thing to AI being very much deployed in production in the real world.

These are real applications that we interact with all the time. Especially, and very interestingly, we're seeing AI being used in real-world applications in healthcare, biotechnology, and pharma to benefit people, whether that's discovering more effective drugs faster or detecting diseases earlier so that people can get earlier treatments and better health outcomes.

Whatever the case may be, we've really seen a lot of that, which is a good thing in terms of AI being beneficial and used for good purposes. We've also seen AI being used in beneficial ways to protect animals or for environmental protection and things like that, so that's a really cool area of AI as well.

In general, more people are talking about AI to benefit people, which I talk about a lot in my book, and is why my book is called "AI For People and Business", but also this idea of AI for good, or AI for humankind. Obviously AI can be quite scary for a lot of reasons as well.

The idea of AI being used for good as opposed to bad is also related to the concept of AI safety, which we saw people talking about more and more in 2020: again, the idea of AI not being used for harm but for good.

So that's something that's been happening that's been pretty interesting as well. Now, on the AI tools and data front, we've seen a lot more tools coming out, being developed, and advancing further.

Aside from the heavy hitters like TensorFlow, PyTorch, and Scikit-learn, the common tools that a lot of AI and machine learning engineers develop solutions with, we've also seen an explosion in cloud platforms and APIs offered by major cloud providers like AWS, GCP, and Azure, and their different services.

There are even deep-learning-as-a-service type platforms, so definitely a lot going on there in 2020. Data in general has been a big topic as well, with a lot of focus on the data that is used to create and train these machine learning or AI models, whether that's techniques like data augmentation, which means maybe you have a small data set and you need a lot more, so you take the data that you do have and tweak and rework it a little bit so that you have more data.
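
As a small, hedged example of what data augmentation can look like in practice, here's a torchvision transform pipeline that turns each training image into randomly flipped, cropped, and color-jittered variants, effectively enlarging a small image dataset.

```python
# Image data augmentation sketch with torchvision transforms.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),                      # random mirror
    transforms.RandomResizedCrop(224),                      # random crop + resize
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # lighting variation
    transforms.ToTensor(),
])
# augmented_tensor = augment(pil_image)  # applied to each PIL image at training time
```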

So you augment your data with variations of the data you have, or use synthetic data, where you create data synthetically using algorithms or programs of some sort. There's also been a focus on data democratization: making data more accessible to people within the company, or in general, or to the public.

That means fewer silos, more availability, more accessibility, and even things like so-called open data. There are now a lot of publicly available data sets as well, and even data search tools; Google, for example, created a data search tool where you can just do a search and see all sorts of data sets for any given type of data you're looking for, all from one tool, which is really interesting.

In another area, in 2020 there's been a lot of focus on ethics, safety, and impact. I talked a little bit about that before, but there's this idea of ethics, right versus wrong, good versus bad, in how AI is used, and also this idea of AI responsibility, or responsible AI.

So not only using AI responsibly, but being responsible for the way that AI is used. I mentioned AI safety already, but there's just been more focus on ethics, and it's making more news and headlines, especially in areas like algorithmic bias and AI fairness.

AI that benefits people should benefit all people, not just specific subsets of people. Then there's AI safety, and the fact that AI, just like any technology, can potentially have big impacts on society, economics, the environment, and so on.

So people are keeping an eye on those things and focusing more on them. We've even seen frameworks come out to address the specific areas that could be impacted by AI: how to recognize the impact, how to plan for it, how to mitigate any negative effects, and so on.

As a result, there are many more organizations and partnerships, different frameworks, models, and guidelines, and now even companies that consult on AI ethics and so on.

So that's really been a bigger thing in 2020 as well. Another area related to that is this idea of governance, regulations, and compliance. Governance can mean corporate governance, like how companies manage the data, applications, and things like AI that they have, the processes behind them, and how they protect privacy and implement security, but also governance at the government level: how different governments across the world create laws and regulations to protect people's privacy, security, and so on.

Then there's the idea of compliance, not only with those kinds of regulations but also with internal company guidelines, best practices, or other standards that aren't necessarily laws or regulations per se, but that companies try to comply with because they're the right thing to do.

And as such, just like AI responsibility or responsible AI, there's also this idea that we heard more about in 2020 of accountability and accountable AI: who's accountable when things go wrong or when compliance isn't happening, and so on. We'll see what happens there.

There's a lot going on there. GDPR is a big example of some of this in Europe, the US has some similar things, mainly in the state of California, and Canada and other countries have regulations like that as well. So we saw more of that in 2020, and of course we'll see more moving forward too.

And then finally for the 2020 year in review, there was another area that gained some steam and is also worth looking at, which is AI education and talent. First of all, there's a talent shortage. There has been for a while, so that's not new in 2020, but there's definitely a talent shortage.

What 2020 did do is put a lot more focus on training and learning and things like that, so we've seen a big increase in educational offerings and training opportunities in artificial intelligence, machine learning, data science, and similar fields.

Also, in terms of talent, there's definitely been more of a focus, and for good reason, on diversity and inclusion. That's been an issue in technology for quite some time and is especially an issue in areas like AI, so it's a good thing, in my opinion, that people are talking about it and focusing on it more.

Of course, let us know in the comments about any experiences you've seen out there or thoughts on that, but we have seen more focus on it in 2020 and more news about it.

All right, now on to the 2021 predictions and what to expect moving forward, especially next year and beyond. Starting again with advancements and trends as the first category: I mentioned RPA, or robotic process automation, and that's certainly not going to slow down, so you should expect to see more of that, and more deployments of it, in 2021 and beyond. You're also starting to hear the term hyper-automation more.

You'll probably hear more of that as well, but it's basically automation using AI versus traditional automation. The concept of automation isn't new and isn't necessarily AI-specific at all; people have been automating things for a very long time, pretty much throughout the entire industrial age, really since the industrial revolution.

Especially with software and algorithms, writing computer code automated a lot of things well before AI was really a thing or used widely in the real world. A lot of the things we use, or that you've come across in companies, were already being automated, but now there's this idea of AI-based or AI-powered automation, sometimes called hyper-automation, and we're seeing more and more of that.

Another area that's interesting is simulation and this concept of digital twins. I talked about synthetic data before, and one thing we'll see more and more of is companies creating simulations to auto-generate synthetic data, but also creating replicas of their company, of certain processes the company goes through, or even of a whole supply chain.

Let's say you create almost like a computer simulation of all of that, what they call a digital twin. You can then run that simulation with different parameters so that you can automatically generate data, but you can also see how different models, pulling different levers, or making different decisions affects the digital twin before you actually implement anything in the real world. So that's definitely something to keep an eye out for in 2021 and beyond.
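
Here's a deliberately tiny, toy sketch of the digital twin idea: a simulation of a week of demand versus stock that you can run with different "levers" (reorder point, order size) to generate synthetic data and compare decisions before touching the real process. All numbers and rules here are made-up assumptions for illustration only.

```python
# Toy "digital twin": simulate a process, pull different levers, compare outcomes.
import random

def simulate_week(reorder_point, order_size, start_stock=100, seed=0):
    random.seed(seed)
    stock, missed_sales = start_stock, 0
    for _ in range(7):
        demand = random.randint(5, 30)           # simulated daily demand
        missed_sales += max(0, demand - stock)   # demand we could not serve
        stock = max(0, stock - demand)
        if stock < reorder_point:
            stock += order_size                  # simulated restocking rule
    return missed_sales

print(simulate_week(reorder_point=20, order_size=50))   # policy A
print(simulate_week(reorder_point=40, order_size=80))   # policy B
```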

As I mentioned, natural language processing and understanding is a hot area of AI and will continue to be, so expect advancements there, specifically in automated speech recognition, or ASR, conversational intelligence, virtual assistants, chatbots, and so on.

Lately, people have been talking about a new version of GPT-3 called GPT-4, which may have one trillion or more parameters. For those of you who know what that means, you'll know that GPT-3 had billions of parameters (roughly 175 billion) and was a huge model, and that size is largely the reason it made so much progress. The idea is that the more parameters you have, the closer you might be getting to how the human brain works, given how many neurons and synapses the human brain has. So keep an eye out for whether there is a GPT-4 and whether it sets a new record of one trillion or more parameters.

For reinforcement learning, expect to see more there: more development, more real-world applications, and again branching out more and more from just playing and beating games. Also keep an eye out for new techniques such as self-supervised learning.

For example, I did a video on my channel a little while back on self-supervised learning. That's a super interesting area of AI, and it's a whole different way of doing learning compared to some of those other techniques like supervised or unsupervised learning, so keep an eye out for that.

The other thing is that a lot of companies don't necessarily have that much data. We talked about synthetic data and simulation, but there are also areas of AI that focus on what's called zero-shot or few-shot learning: the idea that you can have models trained on very few or no data samples, so basically having very little data, or none.

Self-supervised learning is related to that, and when I say no data, I don't mean no data whatsoever, but maybe no labeled data, as in the case of supervised learning. In any case, that should be an area to keep an eye on as well, because there's a lot of active research going on there and a lot of interest in being able to create models that don't need much data at all to be usable and effective.
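
As a hedged illustration of the zero-shot idea, the Hugging Face transformers library exposes a zero-shot classification pipeline where the model assigns labels it was never explicitly trained on, so no task-specific labeled data is required. The model name and labels below are just common, illustrative choices.

```python
# Zero-shot classification sketch: no labeled training data for these labels.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
print(classifier(
    "The new chip can train neural networks twice as fast.",
    candidate_labels=["hardware", "healthcare", "finance"],
))
```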

Another thing: if you're familiar with descriptive analytics and predictive analytics, descriptive analytics is about looking at past data and understanding what it tells you from a historical perspective, and predictive analytics takes that a step further by making predictions using models like AI and machine learning models. The next step from there is prescriptive analytics: not only making predictions, but also optimizing certain things based on the data you have or the predictions those models make, as well as recommending actions or decisions, or even automating actions and decisions, based on these prescriptive AI models.

That's going to be an interesting area to keep an eye out for and something you should expect to see more of; some people also refer to this as decision science, as opposed to just data science. If you hear the term decision science, think of finding ways to automate decision-making or taking actions. On the tools and data side of things, over the years, especially in traditional software development and things like building mobile apps or SaaS platforms, there's been a much bigger focus on what's called ops, specifically DevOps: creating automated deployments of software and updates when you're releasing new versions or features, but also building out cloud-based infrastructure automatically using tools like Terraform and so on.

That spawned a whole new area of ops called DataOps, which people sometimes treat as almost synonymous with data engineering: creating data pipelines and data back ends like data lakes, data warehouses, and that kind of thing. On the analytics side there's MLOps, or machine learning ops, and AIOps: how you deploy these models, how you version-track them, how you swap them out as you update them, and how you monitor what's called model drift, where over time the model isn't performing as well or isn't as accurate because the data changes and conditions change.

So how do you monitor all of that? How do you alert on it? How do you version-control everything you're doing? That's definitely an area to keep an eye out for in 2021.
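
One simple, hedged sketch of what drift monitoring can look like: compare the distribution of a feature in recent production data against the training data with a two-sample Kolmogorov-Smirnov test from SciPy. The threshold and the random stand-in data are assumptions for illustration.

```python
# Minimal data-drift check: has this feature's distribution shifted in production?
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.01):
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha   # True means the distributions differ: investigate

train = np.random.normal(0.0, 1.0, 5000)   # stand-in for training-time feature values
live = np.random.normal(0.5, 1.0, 5000)    # stand-in for recent production values
print(feature_drifted(train, live))
```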

Also gaining a lot of steam is a similar area called AutoML, which is automated machine learning. Right now, a lot of how data scientists and machine learning engineers build AI and machine learning models is by tweaking all these parameters and trying lots of different models; it's very experimental, very trial and error, and time-consuming. You have to try many kinds of models, and even within the same type of model, tweak what are called hyperparameters to find the optimal model.

The idea of AutoML is not only to help find those best-performing models as quickly as possible, but even to automate some of the deployment of the models, whether that's deploying to a RESTful API endpoint or something you access via the web or HTTPS, whatever the case may be.
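
To make that concrete, here's a minimal sketch of the automated model and hyperparameter search at the heart of AutoML tools, using scikit-learn's RandomizedSearchCV on a random forest; the search space and dataset are illustrative assumptions, and real AutoML platforms go much further.

```python
# Automated hyperparameter search sketch (the core idea behind AutoML).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [50, 100, 200],
                         "max_depth": [2, 4, 8, None]},
    n_iter=8, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```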

We're also starting to see more of that being applied upstream, on the data preparation side. A lot of people will say that 80 percent of the time spent building AI and machine learning models isn't actually the model training or model development part, but rather the data preparation part, what they call data wrangling or data munging. So there's interest in automating some of that data preparation: cleaning the data, preparing it, and making sure all the fields and variables are normalized or standardized the right way if they need to be, and so on.
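
And for the data preparation side, here's a minimal, hedged sketch of automating cleaning and standardization with a scikit-learn Pipeline, so the same imputation and scaling are applied consistently at training and prediction time; the specific steps and model are illustrative choices.

```python
# Automated data preparation sketch: impute, standardize, then model, in one pipeline.
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

prep_and_model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # fill in missing values
    ("scale", StandardScaler()),                   # standardize each numeric feature
    ("model", LogisticRegression(max_iter=1000)),
])
# prep_and_model.fit(X_train, y_train); prep_and_model.predict(X_new)
```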

And then lastly on the data side of things, the idea of data democratization and data availability that I mentioned before will be an area to keep an eye on in 2021 as well. We're seeing more and more of that all the time.

We'll see what the future brings in the area of ethics, safety, and impact that I mentioned before. Again, I think responsible and accountable AI is becoming much more talked about than previously and should continue to be moving forward, as should AI fairness and safety, which we mentioned as well.

So keep an eye out for what's new: are we going to see more guidelines, standards, and best practices on that stuff? Are we going to see more regulations, especially around privacy and security, which we're already starting to see with things like GDPR and the California Consumer Privacy Act, or CCPA? And then there's even an area of AI called federated learning that is being talked about much more and should see advancements in the future.

I talked about that in some previous videos, but it's basically the idea of distributing the learning onto devices, like the edge idea, so that there isn't one central database that could potentially be breached, and of making almost private models for individuals, like on your own cell phone.

Maybe your cell phone is training models that are good for you and work for you, but aren't accessible to anybody else, and your data is not being exposed anywhere else. Those are some of the ideas behind federated learning, and 2021 should bring more of that.
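
Here's a very small, hedged sketch of the federated-averaging idea behind federated learning: each "device" computes an update on its own local data, and only model parameters, never the raw data, are averaged centrally. It's a pure-NumPy toy with a linear model and made-up data.

```python
# Toy federated averaging: average parameter updates, never share raw data.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    grad = X.T @ (X @ weights - y) / len(y)   # gradient on this device's data only
    return weights - lr * grad

rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
global_w = np.zeros(3)

for _ in range(10):                                              # communication rounds
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)                          # average weights only
print(global_w)
```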

And as I mentioned around this idea of governance, regulations, and compliance, beyond keeping an eye on the things I just mentioned, there are also the concepts of AI explainability, interpretability, and transparency.

Explainability is the idea of being able to explain how an AI model works and how it makes decisions. One of the problems is that you have these so-called black-box models: deep learning often uses very complex neural networks, or you have random forests, which are ensembles of decision trees, and when someone asks how the thing works or how it makes decisions, it can be very hard to explain in plain English, if not impossible, because of how complex the models are and how they work under the surface. So there's this idea of creating models that are explainable, or techniques for taking these more black-box algorithms, like neural networks and deep learning, and making them explainable as well as interpretable.

Interpretability is really about being able to understand what specific factors the model is prioritizing over others, and which parameters and variables are having the biggest influence on the model's outcome.

Let's say you're trying to predict something like a stock market price, or whether an image has a cat in it. How do you interpret which factors have a bigger influence on the ability to accurately predict that stock price or detect that cat, versus other things that might be in the model but don't have that big an effect? And then transparency is really the idea of having access to see how these models are working, what they're being used for, and who's making decisions around them, basically making things more available and open.
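
As one concrete, hedged example of interpretability tooling, scikit-learn's permutation importance shows which input features most influence a trained model's predictions, which is one simple way to peek inside an otherwise black-box model; the dataset and model here are illustrative.

```python
# Interpretability sketch: which features influence the model's predictions most?
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
scores = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = scores.importances_mean.argsort()[::-1][:5]
print("Most influential feature indices:", top)
```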

So those are other areas to keep an eye out for, for sure. And then finally, the education and talent category that we talked about for 2020. Again, there's a talent gap, so I think you'll hear more and more about this new focus on the idea of data literacy.

That means helping people better understand and speak the language of data, and understand how to use data to do things within their organization, or to help their organization, or to help people, like we said with AI for good, that sort of thing, and we're seeing more and more offerings out there.

Educational ones, that is. I have my own: I do a lot of training, I do a lot of speaking, I wrote my book. I've been on a mission to help people improve their data literacy, as well as their AI literacy, for a long time.

I have my own organization called whyofai, which you can find at whyofai.com. The idea there is to provide AI training and education specifically to typically non-technical folks like executives, managers, and entrepreneurs, but also to practitioners who are more hands-on and technical but want to look at AI through the lens of business and strategy and things like that.

So that's another thing, but of course there's so much out there nowadays, and we're going to see more and more of that. In general, there's been a lot going on in 2020 across the board, much of it not AI-related at all, and then of course a lot in AI as well, just like we talked about.

And 2021 is certainly going to be exciting in terms of seeing all these developments in AI across all the different categories we talked about. Like I said, let us know in the comments: what are your thoughts? Did I miss anything for 2020? Are there other things we should be looking at for 2021 and beyond that weren't discussed here today? One thing I think is worth mentioning is that AI has always had a lot of hype around it, and determining what is hype versus what is reality is not necessarily easy for a lot of folks. People also tend to be overly optimistic, I've found, so the hype, and even the predictions and the expected pace of development and advancement, tend to be a little overly optimistic compared to what we really see.

Autonomous vehicles are a great example of that, so we'll see. Keep in mind that who knows how quickly things will move, and whether the predictions are overly optimistic or not. Let us know what you think in the comments.

All right, well, that's it for the first-ever AI News year in review. Hopefully you learned a lot about what happened in 2020 and also what to expect in 2021. Be sure to subscribe to this channel if you haven't already, and check out the description below for more information and resources to help you along your data and AI learning journey.

Thanks again,  and I’ll see you in the next episode of AI News.
