
AI and Fundamental Rights: How the AI Act Aims to Protect Individuals

Artificial Intelligence (AI) is increasingly becoming a part of our lives. From facial recognition systems to self-driving cars, AI technologies are changing the way we live and work. But with this increasing presence of AI in our lives comes a need to ensure that it is used in a safe, ethical, and responsible way.

In response to this need, the European Union has proposed the Artificial Intelligence Act (AI Act). This proposed regulation seeks to ensure that AI is developed and used in a way that protects the fundamental rights and freedoms of individuals and society. It sets out a number of requirements for AI systems, such as requiring human oversight, fairness, non-discrimination, privacy, data protection, safety, and robustness.

This blog post will look at the AI Act in more detail, exploring its purpose, its categories, and its main concerns. We will also look at how the Act is designed to protect individuals' fundamental rights and how it can be implemented in a way that ensures AI is used for good.

What is the AI Act?

The Artificial Intelligence Act (AI Act) is a proposed regulation of the European Union that aims to introduce a common regulatory and legal framework for artificial intelligence. It was proposed by the European Commission on 21 April 2021 and is currently being negotiated by the European Parliament and the Council of the European Union.

The purpose of the AI Act is to ensure that AI is developed and used in a way that is safe, ethical, and responsible. The Act sets out a number of requirements for AI systems, including requirements for human oversight, fairness, non-discrimination, privacy, data protection, safety, and robustness.

The AI Act is a complex piece of legislation, but it has the potential to ensure that AI is used in a way that benefits society. The Act is still under negotiation, but it is expected to come into force in 2026.

Categories of the AI Act

The AI Act defines three categories of AI systems:

  • Unacceptable risk: These systems are banned, such as those that use AI for social scoring or for mass surveillance.
  • High risk: These systems are subject to specific legal requirements, such as those that use AI for facial recognition or for hiring decisions.
  • Minimal risk: These systems are largely unregulated, but they must still comply with general EU law, such as the General Data Protection Regulation (GDPR).

Let’s see each of these categories in more detail.

Unacceptable risk

The AI Act defines unacceptable risk systems as those that pose a serious threat to the fundamental rights and freedoms of natural persons, such as their right to privacy, non-discrimination, or physical integrity.

Some examples of unacceptable risk systems include:

  • Social scoring systems: These systems use AI to assign a score to individuals based on their behavior, such as their spending habits or their social media activity. These systems can be used to discriminate against individuals or to restrict their access to services.
  • Mass surveillance systems: These systems use AI to collect and analyze large amounts of data about individuals, such as their location, their communications, and their online activity. These systems can be used to violate individuals’ privacy and to target them with discrimination or violence.
  • Biometric identification systems: These systems use AI to identify individuals based on their biometric data, such as their fingerprints, their facial features, or their voice. These systems can be used to track individuals without their consent and to deny them access to services.

The AI Act prohibits the development and use of unacceptable risk systems. This means that companies and organizations cannot develop or use these systems in the European Union.

There are a few exceptions to the prohibition on unacceptable risk systems. For example, the prohibition does not apply to systems that are used by law enforcement agencies for the prevention or detection of crime. However, even in these cases, the systems must be used in a way that complies with the law and that does not violate individuals’ fundamental rights.

The prohibition on unacceptable risk systems is an important part of the AI Act. It is designed to protect individuals’ fundamental rights and to ensure that AI is used in a way that is safe and ethical.

High Risk

The AI Act defines high-risk systems as those that pose a significant threat to the safety or fundamental rights of natural persons, such as their right to life, health, or property.

Some examples of high-risk systems include:

  • Facial recognition systems: These systems use AI to identify individuals based on their facial features. These systems can be used to track individuals without their consent, to deny them access to services, or to target them with discrimination or violence.
  • Hiring decision systems: These systems use AI to make hiring decisions. These systems can be used to discriminate against individuals on the basis of their race, gender, or other protected characteristics.
  • Credit scoring systems: These systems use AI to assess the creditworthiness of individuals. These systems can be used to deny individuals access to credit or to charge them higher interest rates.
  • Medical diagnosis systems: These systems use AI to diagnose medical conditions. These systems can be used to make mistakes that could have serious consequences for patients’ health.

The AI Act sets out specific requirements for high-risk AI systems. These requirements include:

  • Human oversight: High-risk AI systems must be designed in a way that allows for human oversight. This means that there must be a way for humans to understand how the system works and to intervene if necessary.
  • Fairness and non-discrimination: High-risk AI systems must not be used in a way that discriminates against individuals or groups of people.
  • Privacy and data protection: High-risk AI systems must comply with the GDPR and other EU data protection laws.
  • Safety and robustness: High-risk AI systems must be designed in a way that minimizes the risk of harm to individuals or society.

The AI Act also requires providers of high-risk AI systems to register their systems with a central EU database. This will allow the authorities to monitor the use of these systems and to take action if they are used in a way that violates the law.

The requirements for high-risk AI systems are designed to ensure that these systems are used in a safe and ethical way. They will help to protect individuals’ fundamental rights and to ensure that AI is used for good.

Minimal Risk

The AI Act defines minimal risk systems as those that do not pose any significant threat to the safety or fundamental rights of natural persons. This means that they are considered to be relatively safe and ethical.

Some examples of minimal risk systems include:

  • Chatbots: These systems use AI to simulate conversation with humans. They are often used in customer service applications.
  • Online recommendation systems: These systems use AI to recommend products or services to users. They are often used in e-commerce applications.
  • Spam filters: These systems use AI to identify and filter out spam emails.
  • Fraud detection systems: These systems use AI to identify and prevent fraudulent transactions.

The AI Act does not impose any specific requirements on minimal risk systems. However, they must still comply with general EU law, such as the General Data Protection Regulation (GDPR).

The AI Act also requires providers of minimal risk systems to make certain information publicly available, such as the purpose of the system and the data that it uses. This will allow users to make informed decisions about whether or not to use these systems.

The minimal risk category is designed to ensure that AI systems that are considered to be relatively safe and ethical are not overregulated. This will help to promote the development and use of these systems, which can benefit society in a number of ways.

Main concerns of the AI Act

The AI Act is a complex piece of legislation that has been met with mixed reactions from the AI community. Some people have praised the Act for its ambitious approach to regulating AI, while others have criticized it for being too complex and burdensome.

Here are some of the main concerns that have been raised about the AI Act:

  • The definition of AI is too broad. The AI Act defines AI as “a system that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” This definition is so broad that it could include a wide range of systems, from simple chatbots to complex self-driving cars. This has led to concerns that the Act could be overreaching and could stifle innovation.
  • The requirements for high-risk AI systems are too burdensome. The AI Act requires providers of high-risk AI systems to register their systems with a central EU database, to have human oversight, and to carry out impact assessments. These requirements are seen by some as being too burdensome, especially for small businesses.
  • The penalties for non-compliance are too weak. The AI Act provides for fines of up to €20 million or 4% of global turnover for non-compliance. However, some people have argued that these penalties are not enough to deter companies from breaking the law.

These are just some of the main concerns that have been raised about the AI Act. It is important to note that the Act is still under negotiation, so it is possible that some of these concerns will be addressed before it comes into force. However, the Act is a complex piece of legislation, and it is likely that there will be further debate about it in the coming months and years.

Conclusions

The AI Act is a proposed regulation from the European Union that aims to ensure that AI is developed and used in a safe, ethical, and responsible way. It sets out a number of requirements for AI systems, such as requiring human oversight, fairness, non-discrimination, privacy, data protection, safety, and robustness. The Act is still under negotiation, but it is expected to come into force in 2026.

The AI Act is an ambitious attempt to regulate AI in the European Union. It sets out a number of requirements that are designed to ensure that AI is used in a way that benefits society. However, the Act has also been met with mixed reactions, and there are still a number of concerns that need to be addressed before it comes into force.

For example, some people have expressed concern that the definition of AI is too broad, and that the requirements for high-risk AI systems are too burdensome. Others have argued that the penalties for non-compliance are too weak.

Ultimately, the AI Act is a complex piece of legislation that will have a significant impact on the development and use of AI in the European Union. It is important that these concerns are addressed, and that the Act is implemented in a way that ensures that AI is used in a safe, ethical, and responsible way.

 

The post AI and Fundamental Rights: How the AI Act Aims to Protect Individuals appeared first on Marina Mele's site.

Children are not responsible for other people’s emotions

Sometimes, when we talk to our kids, we use phrases that can make them feel responsible for the emotions of adults. In this article, I explain which phrases and what problems there are in using them.

Do these phrases sound familiar to you?

You’ve told your daughter three times to pick up the toy she was playing with and left on the dining room floor, and each time, she didn’t answer or even listen. On the fourth time, you say something like, I’ve told you several times to pick up the toy, you’re making me angry!

Guests have come over for dinner, and your child doesn’t want to sit next to one of them, and the guest says, You don’t want to sit next to me? Oh, you’re making me sad.

You’re trying to do something, and the kids are running around and shouting, and they’re not listening when you tell them to calm down, and eventually you say, Can you stop? You’re making me nervous!

Or some day when you’re playing really well with the kids, you might say, I’m happy when you’re happy.

What’s the problem with these phrases? Well, if you pay attention, these phrases create a direct relationship between the actions of the children and the emotions of the parents or adults. The child is responsible for making their parents angry because they didn’t pick up their toys. The child is responsible for making a guest sad because they didn’t want to sit next to them. The child is responsible for making their father nervous, or for making their mother happy.

Phrases like these end up making children responsible for the emotions of adults. What a heavy burden for a child, don’t you think?

Everyone is responsible for their own emotions.

When my children don’t answer me, I feel frustrated. Sometimes I even get angry because I repeat something to them several times, they look at me, and continue playing as if nothing happened.

But it’s important to realize that on the one hand, the child is having a great time, is distracted, absorbed in the game or situation, and their mind is not thinking, I’m going to ignore what my parents say and not answer. It’s happened to me several times when I raise my voice to get their attention and they react with, What happened? Did you say anything?

And on the other hand, it’s me who has a preconceived idea in my mind of what should happen, what we should do, how they should answer or react. And if that idea is not fulfilled, it generates discomfort and frustration for me. Let me explain it a little bit better:

  • For example, in my head I think children should pick up their toys when I ask them to. If they don’t listen, I get frustrated and maybe even angry.

  • We’re running late for school and the children are calmly looking for a toy to take with them on the way. I get stressed because I want to arrive on time and I see that if we continue like this, we won’t make it.

In all these situations, I am responsible for what I feel and how I act, not others. And when I realize and accept it, I start thinking constructively:

When they don’t listen to me, I approach them, touch their arm to get their attention, ask them to look at me for a moment, and when I have their full attention and make sure they are listening, then I can say something like: I see that you’re having a lot of fun playing, but now it’s time for dinner and we need to pick up everything you left in the dining room. After dinner, or tomorrow you can continue playing.

When we’re running late for school, I have to accept that yes, we will be late, but I can look for some game to walk faster to school. And the next day, maybe get up 5 minutes earlier, or not get distracted when we’re all having breakfast, and start the routine of leaving a little earlier.

Why am I explaining all this? Well, because I think it’s important to be aware that we are the only ones responsible for our emotions and how we act on them. This way, we won’t blame our children, and we can also teach them that they are responsible for their emotions.

Why is it important to teach children that they are responsible for their own emotions?

Phrases like you’re making me angry can make a child feel responsible for the happiness or emotions of adults, which can put a lot of pressure on the child.

If they see their parent sad or angry, they may think, I did something to make them feel that way. But who knows if their parent is feeling that way because of something unrelated to the child, or as often happens, it’s because many things combined. But children can end up feeling guilty and responsible for the emotions of adults.

This can also make children more sensitive to manipulation or emotional blackmail: for example, doing something they’ve been asked to do just to make others happy, whether they are adults, schoolmates, cousins, etc.

But it’s not just that; if they’re sad or angry, they also delegate the responsibility for causing those emotions to someone else. They believe that someone else has caused their emotions, and therefore, they expect someone else to fix them.

Teaching our children that they are responsible for their own emotions and actions gives them control over those emotions and teaches them how to manage them. We need to teach them that all emotions are natural, and that sometimes they don’t have control over what they feel, but they do have control over how they act.

How can we respond to these situations?

One of the things that helps me when I read about any topic is examples of how to apply what I just read. They help me think about situations where I can do things differently, where I can improve. But also the opposite: they make it easier for me to remember what I’ve read when I’m in these situations and how to act.

Here are some examples:

We can express our feelings, but do so from the perspective of our responsibility, not blaming others.

  • I am sad because I had a bad day at work, but I know it will pass.

  • I am sad because I didn’t get what I wanted, but I need to accept the situation and look for other options.

  • I am experiencing a lot of frustration right now, but it has nothing to do with you. It’s just something that’s happening within me, and I’m managing it.

Teach our children that emotions are neither good nor bad, they are natural.

  • It’s normal to feel sad or angry sometimes. It’s a normal part of life, and we can learn to manage these emotions.

  • Feeling sadness or anger is not bad; it just means that something is affecting us.

  • Don’t worry about feeling nervous or scared, it’s normal, and we all feel it sometimes.

Teach our children that sometimes we can’t control our emotions, but we are responsible for our actions.

  • I understand that you are angry, but hitting your sister is not okay.

  • Sometimes emotions can be very intense and difficult to control, but we can always decide how to react to these emotions.

  • It’s normal to feel frustrated or sad, but it’s important not to let these emotions take over and think before we react.

If you have any useful phrases that can help children take responsibility for their emotions and learn to manage them, please add them to the comments so we can include them here too. Thank you!

In summary, it’s important for children to understand that they are not responsible for their parents’ happiness. Happiness is an individual responsibility, and we shouldn’t burden other people, especially our children, with it. It’s important for children to learn to take charge of their own happiness and to be allowed to make decisions and express their emotions. This not only makes them happier, but it also helps them develop social and emotional skills that will be very useful for them in the future.

The post Children are not responsible for other people’s emotions appeared first on Marina Mele's site.

3rd-grade & Karatsuba multiplication Algorithms

In this post we’re going to study the third-grade algorithm to compute the product of two numbers, and we’re going to compare it with a much more efficient algorithm: The Karatsuba multiplication algorithm.

Did you know that these two algorithms are the ones used in Python’s built-in multiplication?

We will talk about the Order of both algorithms and give you Python implementations of both of them.

Maybe you’re thinking, why is she writing about algorithms now? Some time ago, I took the course Algorithms: Design and Analysis, Part 1 & 2, by Tim Roughgarden, and it was a lot of fun.

Now I’m taking the course again, but I’m spending more time to review the algorithms and play with them. I really encourage you to take the course if you have some time. But first, read this post 🙂

This is the outline:

  • Third grade multiplication algorithm
  • Python code for the third grade product algorithm
  • The Karatsuba Algorithm
  • Python code for the Karatsuba algorithm

Let’s start! 🙂

Third grade multiplication algorithm

First, we’re going to review the third grade algorithm, which all of you already know 🙂

Let’s start with these two numbers: 5678 x 1234. In order to compute their product you start with 4*5678, represented as:

  (2)(3)(3)
   5  6  7  8
x  1  2  3 |4|
-------------
2  2  7  1  2

Let’s count the number of operations performed in this step:

– If n is the number of digits of the first number, there are n products and at most n sums (carried numbers).

– So in total, you need 2n operations.

If we continue with the product, we have to repeat the same operation n times (where we assume that n is also the number of digits of the second number):

         5  6  7  8
      x  1  2  3  4
      -------------
      2  2  7  1  2
   1  7  0  3  4
1  1  3  5  6
5  6  7  8

which gives a total of 2n² operations.

Finally, you need to sum all these numbers,

            5  6  7  8
         x  1  2  3  4
         -------------
         2  2  7  1  2
      1  7  0  3  4
   1  1  3  5  6
+  5  6  7  8
----------------------
   7  0  0  6  6  5  2

which takes around n² more operations.

So in total, the number of operations is ~3n², which means that it’s quadratic in the input size (proportional to n²).

One point I would like to mention about this algorithm is that it’s complete: no matter which x and y you start with, if you perform the algorithm correctly, it terminates and finds the correct solution.

Python code for the third grade product algorithm

Here I give you an example of an implementation of the third grade algorithm, where I have included a Counter for the sum and product operations:

# third_grade_algorithm.py
from functools import reduce  # reduce is no longer a builtin in Python 3


def counted(fn):
    # Counter decorator: skips padding spaces, counts real calls
    def wrapper(*args, **kwargs):
        if "" in args or " " in args:
            return "".join(map(lambda s: s.strip(), args))
        wrapper.called += 1
        return fn(*args, **kwargs)
    wrapper.called = 0
    wrapper.__name__ = fn.__name__
    return wrapper


@counted
def prod(x, y):
    # x, y are strings --> returns a string of x*y
    return str(eval("%s * %s" % (x, y)))


@counted
def suma(x, y):
    # x, y are strings --> returns a string of x+y
    return str(eval("%s + %s" % (x, y)))


def one_to_n_product(d, x):
    """d is a single digit, x is n-digit --> returns a string of d*x
    """
    result = ""
    carry = "0"
    for i, digit in enumerate(reversed(x)):
        r = suma(prod(d, digit), carry)
        carry, digit = r[:-1], r[-1]
        result = digit + result
    return carry + result


def sum_middle_products(middle_products):
    # middle_products is a list of strings --> returns a string
    max_length = max([len(md) for md in middle_products])
    for i, md in enumerate(middle_products):
        middle_products[i] = " " * (max_length - len(md)) + md
    carry = "0"
    result = ""
    for i in range(1, max_length + 1):
        row = [carry] + [md[-i] for md in middle_products]
        r = reduce(suma, row)
        carry, digit = r[:-1], r[-1]
        result = digit + result
    return carry + result


def algorithm(x, y):
    # x, y are integers --> returns an integer, x*y
    x, y = str(x), str(y)
    middle_products = []
    for i, digit in enumerate(reversed(y)):
        middle_products.append(one_to_n_product(digit, x) + " " * i)
    return int(sum_middle_products(middle_products))

Using that algorithm, if you run

$ python -i third_grade_algorithm.py

(where third_grade_algorithm.py is the name of the file), you will run the previous code and end up with the Python console open. This way you can call your algorithm function and try:

>>> algorithm(6885, 1600)
1101600
>>> print("Suma was called %i times" % suma.called)
Suma was called 20 times 
>>> print("Prod was called %i times" % prod.called)
Prod was called 16 times

So it took 20 + 16 = 36 operations to compute the product of these two numbers.

Once we have this code, we can average the number of operations used in the product of n-digit numbers:

from random import randint


def random_prod(n):
    suma.called = 0
    prod.called = 0
    x = randint(pow(10, n - 1), pow(10, n) - 1)  # random n-digit numbers
    y = randint(pow(10, n - 1), pow(10, n) - 1)
    algorithm(str(x), str(y))
    return suma.called + prod.called


def average():
    ntimes = 200
    nmax = 10
    result = []
    for n in range(nmax):
        avg = sum([random_prod(n + 1) for i in range(ntimes)]) / float(ntimes)
        result.append([n + 1, avg])
    return result

In the following figure, we plot the result of the previous experiment when ntimes = 200 samples and nmax = 10 digits:

(Figure: average number of operations of the 3rd-grade algorithm as a function of the number of digits n)

We can see that these points fit the curve f(n) = 2.7·n².

Note that 2.7 < 3, the proportionality factor we deduced before. This is because we assumed the worst-case scenario, whereas the average is taken over random numbers.

To sum up, the 3rd grade algorithm is an Ο(n²) complete algorithm.

But can we do better? Is there an algorithm that performs the product of two numbers quicker?

The answer is yes, and in the following section we will study one of them: The Karatsuba Algorithm.

The Karatsuba Algorithm

Let’s first explain this algorithm through an example:

Imagine you want to compute again the product of these two numbers:

x = 5678
y = 1234

In order to do so, we will first decompose them in a particular way:

x = 5678
a = 56; b = 78
--> x = 100 * a + b

and the same for y:

y = 1234
c = 12; d = 34
--> y = 100 * c + d

If we want to know the product x * y:

xy = (100 * a + b)(100 * c + d) = 100^2 * ac + 100 * (ad + bc) + bd

Where we can calculate the following 3 parts separately:

A = ac 
B = ad + bc
C = bd

However, we would need to compute two products for the B term. Let’s see how we can get away with only one product to reduce calculations 🙂

If we expand the product:

D = (a+b) * (c+d) = ac + ad + bc + bd

we see that the right-hand side of the equation contains all the A, B and C terms.

In particular, if we isolate B = ad + bc, we get:

B = D - ac - bd = D - A - C

Great! Now we only need three smaller products in order to compute x * y:

xy = 100^2 A + 100(D - A - C) + C
A = ac
D = (a+b)(c+d)
C = bd

Let’s put some numbers into the previous expressions:

xy = 100^2 * A + 100 * (D - A - C) + C
A = ac = 56 * 12 = 672
C = bd = 78 * 34 = 2652
D = (a+b)(c+d) = (56 + 78)(12 + 34) = 134 * 46 = 6164

xy = 100^2 * 672 + 100 * (6164 - 672 - 2652) + 2652 
   = 6720000 + 284000 + 2652
   = 7006652

Yes! The result of 1234 * 5678 = 7006652! 🙂
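These three sub-products can be checked with a few lines of plain Python (a quick verification sketch, using the same a, b, c and d as above):

```python
# Decomposition of 5678 * 1234: x = 100*a + b, y = 100*c + d
a, b = 56, 78
c, d = 12, 34

A = a * c              # 672
C = b * d              # 2652
D = (a + b) * (c + d)  # 134 * 46 = 6164

# Recombine: xy = 100^2 * A + 100 * (D - A - C) + C
xy = 10**4 * A + 10**2 * (D - A - C) + C
print(xy)  # 7006652
assert xy == 5678 * 1234
```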

In the next section, we’ll see the pseudo code for the Karatsuba Algorithm and we’re going to translate it into Python code.

Python code for the Karatsuba Algorithm

In general though, we don’t have 4-digit numbers but n-digit numbers. In that case, the decomposition to be made is:

x = n-digit number
m = n/2 if n is even
m = (n+1)/2 if n is odd

x = 10^m * x1 + x2
--> x1 = First (n-m) digits of x
--> x2 = Last m digits of x

which is slightly different if n is even or odd.
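In Python this split does not need string slicing: `divmod` by the right power of ten returns both halves at once. Here is a small sketch of that step (the helper name `split_at` is borrowed from the pseudocode, not from the original post):

```python
def split_at(x, m):
    """Split x so that x == 10**m * x1 + x2, where x2 holds the last m digits."""
    x1, x2 = divmod(x, 10**m)
    return x1, x2

# Even number of digits: 5678 = 10**2 * 56 + 78
assert split_at(5678, 2) == (56, 78)
# Odd number of digits: 12345 = 10**2 * 123 + 45
assert split_at(12345, 2) == (123, 45)
```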

In pseudocode, the Karatsuba Algorithm is:

procedure karatsuba(x, y)
  /* Base Case */
  if (x < 10) or (y < 10)
    return x*y
  /* calculates the number of digits of the numbers */
  m = max(size_base10(x), size_base10(y))
  m2 = m/2
  /* split the digit sequences about the middle */
  a, b = split_at(x, m2)
  c, d = split_at(y, m2)
  /* 3 products of numbers with half the size */
  A = karatsuba(a, c)
  C = karatsuba(b, d)
  D = karatsuba(a+b, c+d)
  return A*10^(2*m2) + (D-A-C)*10^(m2) + C

The order of this algorithm is Ο(n^(log₂ 3)) ≈ Ο(n^1.585).

Let’s write now the Python code 🙂

def karatsuba(x, y):
    # Base case
    if x < 10 or y < 10:
        return x * y
    # Calculate the number of digits of the numbers (// is integer division)
    m2 = max(len(str(x)), len(str(y))) // 2
    # Split the digit sequences about the middle: x = 10**m2 * a + b
    # (divmod also handles x and y with very different lengths safely)
    a, b = divmod(x, 10**m2)
    c, d = divmod(y, 10**m2)
    # 3 products of numbers with half the size
    A = karatsuba(a, c)
    C = karatsuba(b, d)
    D = karatsuba(a + b, c + d)
    return A * 10**(2 * m2) + (D - A - C) * 10**m2 + C

assert(karatsuba(1234, 5678) == 7006652)

Much simpler than the third-grade algorithm, right?

Moreover, we can compare the number of calculations of this algorithm with respect to the third-grade one:

(Figure: number of operations of the Karatsuba vs the 3rd-grade algorithm)

where the red line is the Karatsuba algorithm, and the blue line the 3rd-grade one (see above).
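If you want to reproduce a comparison like this yourself, one simple option (my own instrumentation sketch, not part of the original figure) is to count the base-case single-digit products that Karatsuba performs:

```python
def karatsuba_counted(x, y, counter):
    # Karatsuba as above, but counter[0] tracks base-case products
    if x < 10 or y < 10:
        counter[0] += 1
        return x * y
    m2 = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10**m2)   # x = 10**m2 * a + b
    c, d = divmod(y, 10**m2)
    A = karatsuba_counted(a, c, counter)
    C = karatsuba_counted(b, d, counter)
    D = karatsuba_counted(a + b, c + d, counter)
    return A * 10**(2 * m2) + (D - A - C) * 10**m2 + C

counter = [0]
assert karatsuba_counted(5678, 1234, counter) == 7006652
print(counter[0])  # 11 base-case products for this 4-digit example
```

Repeating this for growing n and averaging, as done above for the 3rd-grade algorithm, makes the n^1.585 versus n² gap visible.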

Finally, I want to write here one of the quotes mentioned in the Algorithms course, which might encourage you to find a better solution 🙂

Perhaps the most important principle of the good algorithm designer is to refuse to be content.
— Aho, Hopcroft and Ullman, The Design and Analysis of Computer Algorithms, 1974

The post 3rd-grade & Karatsuba multiplication Algorithms appeared first on Marina Mele's site.
