Archive for the ‘Problem’ Category

Python: Machine Learning – Part 3

December 3, 2019


Learning :Python and Machine Learning Part 3
Subject: Implementation and saving ML-Model

After creating a data-set and using it to train an ML model, and making sure it works fine and gives high-accuracy predictions (click here to read: Python and Machine Learning Part 2), we may want, or rather need, to keep this trained model and re-use it on actual data. In many real-life ML applications training the model can take a long time, for example with huge training data in image-recognition or voice-recognition models, so we need to keep the model trained even if we exit the application. To do this with sklearn we will follow the “Model persistence” documentation page and use joblib serialization.

First we need to import joblib, and also os so we can print out the file name and its path. We will use two functions from joblib (dump and load); in save_trained_model we use dump. Here is the code.

 # Function to save a trained ML-Model

import joblib, os  # To import joblib and os

def save_trained_model(model_name):

    print('\n  You selected to save the trained ML model.')
    ml_name = input('  Enter a file name: ')
    joblib.dump(model_name, ml_name)
    print('\n  --> ML Model has been saved.\n')
    print('   File Name is :', ml_name)                    # To print out the file name
    print('   File Path is :', os.path.abspath(ml_name))   # To print out the file path

# In the main application code we ask the user whether to save the trained model;
# ML_trained_model is the trained classifier object (e.g. clf from part 2).
print('\n\n Do you want to save the ML trained Model? (Y,N): ')
if input('') in ['y', 'Y']:
    save_trained_model(ML_trained_model)


Now that we have saved our trained ML model, we want to load it and use it in our ML program without training the machine again. I will use the function new_test_data() from part 2 and pass the trained model to it. To do this, we first need to load the trained model. So let’s do it.

 # Function to load trained ML-Model
  
def load_ML_Model(ML_filename):
    the_trained_model= joblib.load(ML_filename)
    
    return the_trained_model

# We call the function in the main application code
# (ML_t_model_filename is a variable holding the file name we used when saving the model).
ML_model = load_ML_Model(ML_t_model_filename)
 


And now we will call our new_test_data() function and pass ML_model to it to see the predictions.

 # Function to test the loaded ML-Model on a new data set

  
def new_test_data(ML_model):
    print('\n\n====================================================')
    print('---------  START PREDICTION  for New Data Set ---------')
    print('\n   In this function a new data set will be generated, ')
    print('  and a trained ML-Model for "mouse on the coordinate plane" ')
    print('  will be loaded from the disk. So we will not train the Model.')
    #print('  So we will not train the Model. ')
    #print('  will use the IF loops.')
    
    new_data_size = 1000 
    new_data_range = 100
    print('\n\n  The new data range is {}, and the new data size is {}.'.format(new_data_range,new_data_size))
    
    # generate new data 
    new_test_data1= []
    for x in range (new_data_size):
        new_test_data1.append([round(random.uniform(-new_data_range,new_data_range),2),round(random.uniform(-new_data_range,new_data_range),2)])
    
    print('\n  This is the prediction for the New Data set..\n')
    # Do prediction using ML_model.
    prediction = ML_model.predict(new_test_data1)
    cot = 0
    # check the predictions accuracy .
    for i in range (len(prediction)) :
        if prediction[i] =='Up_r':
          if ((new_test_data1[i][0]) > 0 and (new_test_data1[i][1]) > 0) :
            cot = cot + 1
        elif  prediction[i] =='Up_l':
          if ((new_test_data1[i][0]) < 0 and (new_test_data1[i][1]) > 0) :
            cot = cot + 1
        elif  prediction[i] =='D_r':
          if ((new_test_data1[i][0]) > 0 and (new_test_data1[i][1]) < 0) :
            cot = cot + 1
        elif  prediction[i] =='D_l':
          if ((new_test_data1[i][0]) < 0 and (new_test_data1[i][1]) < 0) :
            cot = cot + 1
        
    print('\n  We count {} correct prediction out of {} Instances.'.format(cot,(new_data_size)))
    print('\n  The Accuracy is:',round((cot/len(prediction))*100,3),'%')
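
Putting the pieces together, the main code could look roughly like this (a minimal sketch; 'mouse_model.sav' is just an example file name I am assuming here, not necessarily the one used in the post):

 # Sketch: load the saved model and test it on new data

ML_model = load_ML_Model('mouse_model.sav')   # load the saved, trained model from disk
new_test_data(ML_model)                       # predict a fresh data set without re-training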

 








By: Ali Radwani




Python: Machine Learning – Part 2

December 1, 2019


Learning :Python and Machine Learning Part 2
Subject: Requirements, Sample and Implementation

Machine Learning Implementation: In the previous post (click to read: Python and Machine Learning Part 1) we started to learn about Machine Learning (ML) and we used an sklearn model with the Iris data-set. In this post we will generate our own data-set, try to pass it to the ML model and find out if the results satisfy our needs.

First of all, let’s talk about the data we want to collect. Since we are doing tests and cannot do much on the accuracy-checking side, I will select very simple data so we can make sure whether our ML model selects the right labels. So I will write a function to generate pairs of numbers, positive and negative, to represent the mouse location on the coordinate plane, and the labels will be:
Up_r = Up Right, Up_l= Up Left,
D_r= Down Right, D_l= Down Left
So we have (4) classes with 20 instances in each, that’s 80 instances in total.

The data will be passed into the get_test_train_data() function, which will return the train data, the test data and their labels; then we will train the model using the train_me() function, and after that we will run the model on the test data to see if it succeeds in predicting the correct labels.

In this post I will cover the functions that generate the data and convert it into a data-set object, so we can use it with the sklearn model without changing our code from part 1. I will use the same data-set attribute names as the sklearn Iris data-set.

We will also write some information, or say a summary, about the data we have and its classes. So let’s see this part first.


   ## Data Set Characteristics :::
      Creator: Ali Radwani 26/11/2019     
 
     Summary:
              This function will generate a dataset for Machine Learning for 
              test and learning purpose. Numeric x,y represent the position 
              of the mouse on the coordinate plane.
              Up_r = Up Right, Up_l= Up Left, D_r= Down Right, D_l= Down Left

     Number of Instances: 80 (20 in each of four (4) classes)
     Number of Attributes: 2 numeric (x,y), predictive attributes and the class.
     Attribute Information:
                 x (Position)
                 y (Position)
              class:
                 Up_r
                 Up_l
                 D_r
                 D_l


Once we create the data-set object we can attach this information as its description; adding descriptions to your data and applications is a good habit to learn and to have.

What is our data-set? From the summary above we can see that we need a function that randomly generates two float numbers ranging from (-N) to (+N), where N is our data range. We assume these two numbers (a pair) are the x, y position of the mouse on the coordinate plane, so depending on the signs of each pair we add the corresponding class name. At the end we will have a list with three values: x, y, label. Let’s see the code.

 # Function to generate data-set

  def data_set_generator():

      d_size = 400     # data-set size 
      d_range = 200    # Data-set range 
      data_list=[]
      nd1=[]

 # FOR loop to generate the random float numbers 
      for x in range (d_size  ):  
          nd1 =([round(random.uniform(-d_range,d_range),2),round(random.uniform(-d_range,d_range),2)])

 # Here we append the x,y pairs with labels.
          if nd1[0] > 0 and nd1[1] > 0 :
            data_list.append([nd1[0],nd1[1],'Up_r'])
          if nd1[0] < 0 and nd1[1] > 0 :
            data_list.append([nd1[0],nd1[1],'Up_l'])
          if nd1[0] > 0 and nd1[1] < 0 :
            data_list.append([nd1[0],nd1[1],'D_r'])
          if nd1[0] < 0 and nd1[1] < 0 :
            data_list.append([nd1[0],nd1[1],'D_l'])


 # Shuffle the data-set a few times to mix the data more
      for x in range (5):       # To mix the data
          random.shuffle(data_list)

      return data_list   # Return the data-set





While writing the Machine Learning code for the Iris data-set, the data itself, the labels and the other parts were accessed as attributes of the main data-set object. So here we need to create several sets from our data and then put them all together. First I will split the data into two sets, one for the data and one for the targets (labels).

 # Function to prepare data-set

def dataset_prepare(the_dataset):
      '''
      input: dataset
      The function will split the dataset into 2 sets, one for data (data_set)
      and one for labels (target_set)

      '''
      target_set = []
      data_set = []

      for x in range (len(the_dataset)) :
          data_set.append([the_dataset[x][0],the_dataset[x][1]])
          target_set.append([the_dataset[x][2]])

      return data_set, target_set
     


prepare data set
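
To make these two lists behave like the sklearn Iris object (with .data, .target and .DESCR attributes), we can wrap them in a Bunch. This is only a minimal sketch of the idea; the function name and the comments are my own illustration, not the exact code of the post:

 # Sketch: wrap the lists in a Bunch so they look like an sklearn data-set

import bunch   # the same Bunch library imported in part 1

def make_dataset_object(data_set, target_set, description):
    my_data = bunch.Bunch()      # dictionary-like object with attribute access
    my_data.data = data_set      # the x, y pairs
    my_data.target = target_set  # the labels
    my_data.DESCR = description  # the data-set characteristics text above
    return my_data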


With the above two functions we can now train our model and test it to see the prediction accuracy. To make sure, once again, that our ML model can predict on more new data-sets, I created another function that generates another set of data; I wrote it to try, or say to be confident, that YES, the model is working. So let’s see the code.

 # Function to create New dataset

def new_test_data():
    print( '\n\n====================================================' )
    print( '---------  START PREDICTION  for new data set ---------' )
    print( '\n  This is new data set, not the test one.. so there is ' )
    print( '  no labels to do comparing and to get the accuracy we ' )
    print( '  will use the IF loops.' )
    new_data_size = 5000    # data-set size 
    new_data_range = 300   # data-set range 
    print( '  The new data range is {}, and the new data size is {}.'.format( new_data_range, new_data_size ) )

    new_test_data1 = []
     # To generate the new data set.
    for x in range( new_data_size ):
        new_test_data1.append( [round( random.uniform( -new_data_range, new_data_range ), 2 ),
                                round( random.uniform( -new_data_range, new_data_range ), 2 )] )

    print( '\n\n  This is the prediction for the New Data set..\n' )

    prediction = clf.predict( new_test_data1 )
    cot = 0

    # Here we start counting the accuracy 
    for i in range( len( prediction ) ):

        if prediction[i] == 'Up_r':
            if ((new_test_data1[i][0]) > 0 and (new_test_data1[i][1]) > 0):
                cot = cot + 1
        elif prediction[i] == 'Up_l':
            if ((new_test_data1[i][0]) < 0 and (new_test_data1[i][1]) > 0):
                cot = cot + 1
        elif prediction[i] == 'D_r':
            if ((new_test_data1[i][0]) > 0 and (new_test_data1[i][1]) < 0):
                cot = cot + 1
        elif prediction[i] == 'D_l':
            if ((new_test_data1[i][0]) < 0 and (new_test_data1[i][1]) < 0):
                cot = cot + 1

    print( '\n  We count {} correct prediction out of {} Instances.'.format( cot, (new_data_size) ) )
    print( '\n  The Accuracy is:', round( (cot / len( prediction )) * 100, 3 ), '%' )
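
Roughly, the main flow of this post can be outlined like this (a sketch only; get_test_train_data(), train_me() and get_prediction() are the functions from part 1):

 # Sketch of the main flow for this post

the_dataset = data_set_generator()                    # 1. generate random x, y, label rows
data_set, target_set = dataset_prepare(the_dataset)   # 2. split into data and labels
train_data, test_data, train_labels, test_labels = get_test_train_data(data_set, target_set)
train_me(train_data, train_labels)                    # 3. train the Decision Tree classifier
get_prediction(test_data, test_labels, 't')           # 4. check accuracy on the test split
new_test_data()                                       # 5. predict a completely new data set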
  


Wrapping up: In this post we wrote a function to generate a data-set and split it into two parts, one for training and one for testing. Then we tested the model with a fresh new data-set generated by another function. Here is a screenshot of the final result.








Python: Machine Learning – Part 1

November 27, 2019


Learning :Python and Machine Learning
Subject: Requirements, Sample and Implementation

Machine Learning: I will not go through definitions and uses of ML; I think there are plenty of other posts that may be more informative than whatever I would write. In this post I will write about my experience and learning curve in implementing an ML model and testing my own data.

The Story: Two or three days ago I started to read and watch videos about Machine Learning. I found the sklearn site, and from there I created my first ML program to test the Iris data-set; then I wrote a function to generate data (my own random data) and test it with an sklearn ML model.

Let’s start ..

Requirements:

1. Libraries to Import: To work with sklearn models and the other functions we will use, we need to import the following libraries:

import os # I will use it to clear the terminal.

import random # I will use it to generate my data-set.

import numpy as np

import bunch # To create data-set as object

from sklearn import datasets

from sklearn import svm

from sklearn import tree

from sklearn.model_selection import train_test_split as tts

2. Data-set: In my learning steps I use one of the sklearn data-sets, named “Iris”; it stores information about a flower called ‘Iris’. To use the sklearn ML model on other data-sets, I created several functions to generate random data that can be passed into the ML model; I will cover this part later in another post.
First we will see what the Iris dataset is; this part of the information is copied from the sklearn site.

::Iris dataset description ::
dataset type: Classification
contains: 3 classes, 50 samples per class (150 samples in total)
4 Dimensionality
Features: real, positive

The data is a dictionary-like object; the interesting attributes are (a small access example follows this list):
‘data’: the data to learn.
‘target’: the classification labels.
‘target_names’: the meaning of the labels.
‘feature_names’: the meaning of the features.
‘DESCR’: the full description of the dataset.
‘filename’: the physical location of iris csv.
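
For instance, once the data-set is loaded we can take a quick look at a couple of these attributes (a small illustration; the printed values are the standard Iris ones):

 # Quick look at the Iris data-set attributes

the_data = datasets.load_iris()
print(the_data.target_names)   # ['setosa' 'versicolor' 'virginica']
print(the_data.data.shape)     # (150, 4)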

Note: This part helped me write my data-set generating function; that’s why we import the Bunch library, to add lists to a data-set so it appears as an object data-set, and the same code we use for the Iris data-set will work fine with our own data-set. In another post I will cover loading the data from a csv file and discover how to create such a file.

Start writing the code: After I wrote the code and tuned it, I created several functions that can be called with other data-sets, without hard-coding any names from the Iris data-set. This way we can load other data-sets in an easy way.


The Code

 # import libraries 

import numpy as np
from sklearn import datasets
#from sklearn import svm
from sklearn import tree
from sklearn.model_selection import train_test_split as tts
import random, bunch


In the next step we will load the iris dataset into a variable called “the_data”.

 # loading the iris dataset. 

the_data = datasets.load_iris() 


From the “Iris dataset description” section above we found that the data is stored in data and the classification labels in target, so now we will store the data and the target in two other variables.

 # load the data into all_data, and target in all_labels. 
all_data= the_data.data 
all_labels = the_data.target   


We will create an object called ‘clf’ and will use the Decision Tree Classifier from sklearn.

 #  create Decision Tree Classifier 

clf = tree.DecisionTreeClassifier()


In Machine Learning programs we need some data for training and another set of data for testing before we pass the original data, or before we deploy our code on real data. sklearn provides a function to split a given data-set into two parts, test and train. To do this I created a function that we call with the data and label sets; it returns the following: train_data, test_data, train_labels, test_labels.

 #  Function to split a data-set into training and testing data. 

def get_test_train_data(data,labels):

  train_data, test_data, train_labels, test_labels = tts(data,labels,test_size = 0.1)
  return train_data, test_data, train_labels, test_labels


After splitting the data we will have four lists, or say data-sets. We pass train_data and train_labels to the train_me() function; I created this function so we can pass the training data and labels and it will call clf.fit from sklearn. By finishing this part we have trained our ML model and it is ready to test some sample data. But first, let’s see the train_me() function.

 #  Function train_me() will pass the train_data to sklearn Model. 

def train_me(train_data1,train_labels1):
  clf.fit(train_data1,train_labels1)
  print('\n The Model been trained. ')


As we just said, we now have a trained model ready for testing. To test the data set we will use sklearn’s clf.predict function; it returns the list of labels the model predicts to be right. To check whether the model’s predictions are correct, and to get the percentage of correct answers, we count and compare the predicted labels with the actual labels of the test data. Here is the code for get_prediction().

 #  get_prediction() to predict the data labels. 

def get_prediction(new_data_set,test_labels2,accu):

  print('\n This is the prediction labels of the data.\n')

  # calling prediction function clf.predict
  prediction = clf.predict(new_data_set)
  print('\n prediction labels are : ',prediction,len(prediction))
  
  # print the Accuracy
  if accu == 't' :
    cot = 0
    for i in range (len(prediction)) :
      print(prediction[i] , new_data_set[i],test_labels2[i])
      if [prediction[i]] == test_labels2[i]:
        cot = cot + 1
    print('\ncount :',cot)
    print('\n The Accuracy:',(cot/len(prediction))*100,'%')


The accuracy value determines whether we can use the model in real life or should try another model. In a real-data scenario we need to pass a ‘False’ flag for accu, because we cannot cross-check the predicted results against any labels; we can only check some of the results manually.

End of part 1: By now we have all the functions we can use with our own data-set. In the coming images of the code and the run-time screen we can see that we reach a very high accuracy level, so we can use our own data-set; that will be in the coming post.
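
For reference, here is a minimal sketch of how these functions fit together for the Iris data-set (an outline of the calls only, not the exact main code of the post):

 # Sketch: running the whole part-1 flow on the Iris data-set

the_data = datasets.load_iris()
all_data = the_data.data
all_labels = the_data.target

train_data, test_data, train_labels, test_labels = get_test_train_data(all_data, all_labels)
train_me(train_data, train_labels)              # fit the Decision Tree on the training split
get_prediction(test_data, test_labels, 't')     # predict the test split and report the check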

Result screenshot after running the Iris dataset, showing a high accuracy level.








Python: Circle Packing

November 17, 2019


Circle Packing Project
Subject: Draw, circles, Turtle

Definition: In geometry, circle packing is the study of the arrangement of circles on a given surface such that no overlapping occurs and so that all circles touch one another. Wikipedia

So, we have a canvas of size (w, h) and we want to write code to draw X circles in this area without any overlapping or intersection between circles. We will write some functions to do this task; those functions are:
1. c_draw(x1, y1, di): This function takes three arguments: x1, y1 for the circle position and di for the circle diameter.

2. draw_fram(): This function draws the frame on the screen; we set frame_w and frame_h as variables in the setup area of the code.

3. c_generator(max_di): c_generator is the circle-generating function; it takes one argument, max_di, the maximum circle diameter. To generate a circle we generate three random numbers: the x position, the y position and the circle diameter (max_di is the upper limit). With each generation a while loop makes sure the circle is inside the frame; if it is not, we regenerate another one.

4. can_we_draw_it(q1, di1): This one is very important. To make sure a new circle does not overlap with any other, we use the hypot function from the math library; hypot returns the distance between two points. If the distance between the two circle centres is smaller than the sum of their two sizes, the circles overlap and the new one must be rejected.



So, let’s start coding …

First: the import and setup variables:


from turtle import *
import random
import math

# Create a turtle named t:
t =Turtle()
t.speed(0)
t.hideturtle()
t.setheading(0) 
t.pensize(0.5)
t.penup()

# frame size
frame_w = 500 
frame_h = 600 

di_list = [] # To hold the circles x,y and diameters


Now, the frame-drawing function:


def draw_fram () :

    t.penup()
    t.setheading(0)
    t.goto(-frame_w/2, frame_h/2)
    t.pendown()
    t.forward(frame_w)
    t.right(90)
    t.forward(frame_h)
    t.right(90)
    t.forward(frame_w)
    t.right(90)
    t.forward(frame_h)
    t.penup()
    t.goto(0,0)


Now, the circle-drawing function:


def c_draw (x1, y1, di):

    t.goto(x1, y1)
    t.setheading(-90)
    t.pendown()
    t.circle(di)     # note: turtle's circle() takes the radius
    t.penup()


This is the circle generator: we randomly select x, y and a diameter, then check whether the circle falls inside or outside the canvas.


def c_generator (max_di):

    falls_out_frame = True

    while falls_out_frame :
        x1 = random.randint(-(frame_w/2), (frame_w/2))
        y1 = random.randint(-(frame_h/2), (frame_h/2))
        di = random.randint(3, max_di)

        # if true, the circle is inside the canvas
        if (x1-di > ((frame_w/2)*-1)) and (x1-di < ((frame_w/2)-(di*2))) :
            if (y1 < ((frame_h/2)-(di))) and (y1 > ((frame_h/2)-(di))*-1) :
                falls_out_frame = False
                di_list.append([x1-di, y1, di])


With each new circle we need to check the distance and the diameters between the new circle and all the circles we already have in the list; if there is an overlap we delete the new circle’s data (using di_list.pop()) and generate a new circle. To get the distance and the sum of diameters we use this code:

 # get circles distance

    cs_dis = math.hypot(((last_cx + last_cdi) - (c_n_list_x + c_n_list_di)) , (last_cy - c_n_list_y))
    di_total = last_cdi + c_n_list_di
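
The post does not show the full can_we_draw_it() function, so here is a minimal sketch of how the check could look, built from the snippet above (the loop variable names are my own, and the stored x, y values are treated simply as circle centres, which simplifies the original bookkeeping):

 # Sketch: overlap check for the newest circle against all accepted ones

def can_we_draw_it(q1, di1):
    # q1 = (x, y) of the candidate circle, di1 = its size (turtle radius).
    # The candidate itself is the last entry in di_list, so we skip it.
    for c_x, c_y, c_di in di_list[:-1]:
        cs_dis = math.hypot(q1[0] - c_x, q1[1] - c_y)   # centre-to-centre distance
        if cs_dis < di1 + c_di:                         # closer than the two sizes combined -> overlap
            return False
    return True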


To speed up the generation of circles of the right size, I count the failed attempts: every time a circle does not fit and we pop its details from the circles list, we count the pop. Say we start with max_di = 200; whenever the pop count is a multiple of 30 (pop % 30) we reduce max_di by 1, and if max_di drops below 10 we reset it to 60. We keep doing this until we draw 700 circles.


# if di_list pops x time then we reduce the randomization upper limits 
  if (total_pop % 30) == 0:
    max_di = max_di - 1
    if max_di < 10 :
      max_di = 60
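
Tying the pieces together, the main loop could look roughly like this (a sketch only; the 700-circle target and the pop counter come from the description above, the exact original loop is not shown in the post):

 # Sketch of the main drawing loop

max_di = 200
total_pop = 0
circles_drawn = 0

draw_fram()
while circles_drawn < 700:
    c_generator(max_di)                  # appends a candidate [x, y, di] to di_list
    x, y, di = di_list[-1]
    if can_we_draw_it((x, y), di):       # no overlap with the accepted circles
        c_draw(x, y, di)
        circles_drawn += 1
    else:
        di_list.pop()                    # discard the overlapping candidate
        total_pop += 1
        if (total_pop % 30) == 0:        # too many failures: shrink the upper size limit
            max_di = max_di - 1
            if max_di < 10:
                max_di = 60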


Here are some outputs of the circle packing:




With the current output we reached the goal we were looking for. Although there are some empty spaces, if we increase the number of circles it will take more time for the random (x, y, di) generator to find those areas. I am thinking of another version of this code that will cover:
1. Coloring the circles based on the diameter size.
2. A method to fill the spaces.








Python: Numpy – P2

November 10, 2019


Learning : Python Numpy – P2
Subject: Two-dimensional arrays and some basic commands

In real mathematics work we mostly use arrays with more than one dimension; for example, with a two-dimensional array we can store data as a grid of rows and columns.

So let’s start. If we want to create an array with 24 numbers in it, starting from 0 to 23, we use the command np.arange, as below:

 # We are using np.arange to create an array of numbers between (0-23) 

m_array = np.arange(24)
print(m_array)
[Output]: 
[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]
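
The same 24-element array can also be reshaped, for example into a 4×6 matrix; this small sketch matches the “Reshape array to 4×6” screenshot listed at the end of the post:

 # Reshape the 24-element array into 4 rows and 6 columns

m_array = np.arange(24).reshape(4, 6)
print(m_array)
[Output]:
[[ 0  1  2  3  4  5]
 [ 6  7  8  9 10 11]
 [12 13 14 15 16 17]
 [18 19 20 21 22 23]]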
 


And if we want the array to be in a range with a certain increment, we may use this command:

 # Create array between 2-3 with 0.1 interval 

m_array = np.arange(2, 3, 0.1)
print(m_array)
[Output]: 
[ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]


Now, if we want to create an array, say 3×3, filled with random numbers from (0-10), we use the random function in numpy as below:

 # create 3x3 Array with random numbers 0-10 
m_array = np.random.randint(10, size=(3,3))
print(m_array)
[Output]: 
[[6 0 7]
 [1 9 8]
 [5 8 9]] 



And if we want the random numbers to range between two given numbers, we use this command:

# Array 3x3 random values between (10-60)
m_array = np.random.randint(10,60, size=(3,3))
[Output]: 
[[11 23 50]
 [36 44 18]
 [56 24 30]] 


If we want to reshape an array, say from 4×5 (20 elements in the array), we can reshape it into any other shape with 20 elements. Here is the code:

# To create a 4x5 array of random numbers in the range 10-60.
m_array = np.random.randint(10,60, size=(4,5))
print(m_array)

# We will reshape the 4x5 to 2x10
new_shape = m_array.reshape(2,10)
print ('\n   The new 2x10 array:\n',new_shape)
[Output]:
[[37 11 56 18 42]
 [17 12 22 16 42]
 [47 29 17 47 35]
 [49 55 43 13 11]]

The new 2x10 array:
[[37 11 56 18 42 17 12 22 16 42]
 [47 29 17 47 35 49 55 43 13 11]]


We can also convert a list to an array:

# Convert a list l=([2,4,6,8]) to a 1D array
# l is a list with [2,4,6,8] values.
l=([2,4,6,8])
print('  l= ',l)
# Convert it to a 1D array.
ar = np.array(l)
print('\n  Type of l:',type(l),', Type of ar:',type(ar))
print('  ar = ',ar)

[Output]:
l=  [2, 4, 6, 8] 
Type of l: <class 'list'> , Type of ar: <class 'numpy.ndarray'>
ar =  [2 4 6 8]


If we want to add a value to every element in the array, we just write:

# Adding 9 to each element in the array

 
print('ar:',ar)
ar = ar + 9
print('ar after adding 9:',ar)

[Output]:
ar:  [2 4 6 8]
ar after adding 9: [11 13 15 17]


:: numpy Commands::

Command                              # Comment and output
my_array = np.array([1,2,3,4,5])     # Create an array with the integers 1 to 5
len(my_array)                        # Get the array length
np.sum(my_array)                     # Get the sum of the elements in the array

my_array = np.array([1,2,3,4,5])
print(np.sum(my_array))
[Output]: 15
np.max(my_array) # Get the maximum number in the array
my_array = np.array([1, 2, 3,4,5])
max_num = np.max(my_array)
[Output]: 5
np.min(my_array) # Get the minimum number in the array
my_array = np.array([1, 2, 3,4,5])
min_num = np.min(my_array)
[Output]: 1
np.ones(5)   # Create an array of 1s (of length 5)
Output: [ 1., 1., 1., 1., 1.]
m_array = np.arange(24)
print(m_array)
# To create an array with 24 numbers (0-23).
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]
m_array = np.arange(2, 3, 0.1)
print(m_array)
# Create an array from 2 to 3 with 0.1 interval value increments.
[ 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]
m_array = np.random.randint(10, size=(3,3))
print(m_array)
# Create a 3×3 array with random numbers between (0,10)
[[6 0 7]
[1 9 8]
[5 8 9]]
m_array = np.random.randint(10,60, size=(3,3)) # Create a 3×3 array with random numbers between (10,60)
[[11 23 50]
[36 44 18]
[56 24 30]]
# Create a 4×5 array with random numbers.
m_array = np.random.randint(10,60, size=(4,5))

# Reshape m_array from 4×5 to 2×10
new_shape = m_array.reshape(2,10)
print(m_array)
print(new_shape)

# m_array 4×5
[[37 11 56 18 42]
[17 12 22 16 42]
[47 29 17 47 35]
[49 55 43 13 11]]

# The new 2×10 array:
[[37 11 56 18 42 17 12 22 16 42]
[47 29 17 47 35 49 55 43 13 11]]

# convert a list to array:
l=[2,4,6,8]
ar = np.array(l)
# check data type for l and ar:
print('\n Type of l:',type(l),', Type of ar:',type(ar))
[Output]:
l = [2, 4, 6, 8]
ar = [2 4 6 8]
Type of l: <class 'list'> , Type of ar: <class 'numpy.ndarray'>
# Adding 9 to each element in the array
ar = ar + 9
[11 13 15 17]






:: Some Code output ::

Create array with 24 numbers (0-23).
Reshape array to 4×6.
Create random array of numbers (0-10), size 3×3.
Reshape 4×5 array to 2×10.
Convert list to array.








Python: Numpy – P1

November 7, 2019


Learning : Python Numpy – P1
Subject: Numpy and some basic commands

In the coming several posts I will talk about the numpy library and how to use some of its functions. So first, what is numpy? Definition: NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays. It is also known as a powerful package for scientific computing and data manipulation in Python. As with any library or package in Python, we need to install it on our device (we will not go through this process).

Basic commands in numpy: First of all we need to import it in our code, so we will use:

  import numpy as np


To create a one-dimensional array we can use a very easy way:

  # create an array using numpy array function.
my_array = np.array([1, 2, 3,4,5])


Later we will create a random array of numbers in a range.

Now, to get the length of the array we can use the len command:


len(my_array)
Output: 5


To get the sum of all elements in the array we use..

np.sum(my_array)


And to get the maximum and minimum numbers in the array we use ..

 # Get the maximum and minimum numbers in the array
my_array = np.array([1, 2, 3,4,5])
np.max(my_array)
[Output]: 5 

np.min(my_array)
[Output]: 1 


Sometimes we may need to create an array with a certain number of elements that are all ones; to do this we can use this command:

#create array of 1s (of length 5) 
np.ones(5)
Output: [ 1.,  1.,  1.,  1.,  1.]


The default data type will be float; if we want to change it we need to pass ‘dtype’ to the command like this:

#create array of 1s (of length 5) as integer: 
np.ones(5, dtype=int)   # np.int is deprecated in newer NumPy versions, so we use the plain int type
Output: [ 1,  1,  1,  1,  1]


Code output:



So far we have worked on one-dimensional arrays; in the next post we will cover some commands that will help us with arrays of multiple dimensions.



:: numpy Commands::

Command                              # Comment
my_array = np.array([1,2,3,4,5])     # Create an array with the integers 1 to 5
len(my_array)                        # Get the array length
np.sum(my_array)                     # Get the sum of the elements in the array

my_array = np.array([1,2,3,4,5])
print(np.sum(my_array))
[Output]: 15
np.max(my_array) # Get the maximum number in the array
my_array = np.array([1, 2, 3,4,5])
max_num = np.max(my_array)
[Output]: 5
np.min(my_array) # Get the minimum number in the array
my_array = np.array([1, 2, 3,4,5])
min_num = np.min(my_array)
[Output]: 1
np.ones(5)   # Create an array of 1s (of length 5)
Output: [ 1., 1., 1., 1., 1.]










Python and Lindenmayer System – P2

November 3, 2019


Learning : Lindenmayer System P2
Subject: Drawing with python using L-System

In the first part of the Lindenmayer System (L-System) post (click to read) we wrote two functions: one to generate the pattern based on the variables and rules, and one to draw lines and rotate based on the pattern we have.

In this part I will post images of the art we can generate from the L-System. The code for each image is just the L-system that generates its pattern, so it will include: the rules, the angle (right, left), the iterations and the starting variable.
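
As an example of what such a specification looks like, here is the classic right-angled Koch curve written as a small Python dictionary (the key names are my own; the generator from part 1 may use different variable names):

 # L-System specification for the Koch curve (sketch)

koch_curve = {
    'start'      : 'F',                 # starting variable (axiom)
    'rules'      : {'F': 'F+F-F-F+F'},  # rewriting rule applied on every iteration
    'angle'      : 90,                  # turn angle used for '+' and '-'
    'iterations' : 3,
}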


L-System: Koch Curve

L-System: Minkowski Sausage

L-System: … but here the Iteration is: 3

L-System: Again … but here the Iteration is: 3

L-System: Square Sierpinski

L-System: Sierpinski Arrowhead.

L-system: Dragon Curve

L-System: Koch Snowflake

L-System:

L-System:


The possibilities for generating the patterns, and therefore the drawing output, are endless; any slight change in the iterations or the rotation (+ -) angles takes the whole output to a new level. In the coming post I will use the L-system to generate a fractal tree and see what we can get from there.


