Saturday, February 25, 2017

Biomimicry Global Design Challenge: Food Security Indicators

Biomimicry Global Design Challenge 2017

This project is what I researched over the summer. Instead of just looking at the generated formula, I am going to collect data from reputable sources, plug those numbers into the formula, and figure out how much food will be needed by 2030. I will also predict the 2030 values for each variable and plug them into the formula.

Top three sources of CO2 emissions:
1. Fossil fuel combustion
2. Transportation
3. Heat and electricity
Abstract:
There are approximately 200 definitions and 450 indicators of food security. One of the most precise definitions of food security is: "Community food security exists when all citizens obtain a safe, personally acceptable, nutritious diet through a sustainable food system that maximizes healthy choices, community self-reliance and equal access for everyone." (1) Food security can be assessed through precipitation data, food balance sheets, food market surveys, and crop production.
China should be especially concerned about food security because of its constantly increasing population: 1.4 billion of the world's 7.3 billion people live in China. Because so much pollution is produced in China, temperatures are rising, CO2 emissions are increasing, and crop production is being affected. Using MATLAB, we found the correlation coefficients needed to predict future crop yield. PDSI stands for Palmer Drought Severity Index.
The PDSI has a standardized scale used to determine the severity of drought in a region. It ranges from -4 to +4, with values toward -4 meaning dry conditions and values toward +4 meaning wet conditions. "Drought is a deficiency in precipitation over an extended period, usually a season or more, resulting in a water shortage causing adverse impacts on vegetation, animals, and/or people." (8)

Introduction:
Every country in the world is known for the major crops it produces. This project focuses on China's six main crops: wheat, rice, sorghum, soybean, maize, and barley. Climate change has a huge impact on crop production, and food security is a central concern for the present world and for future generations. The world population has increased drastically in the last few years and is expected to reach 9 billion by 2050. Numerous people in the world go to bed without eating, not because there isn't enough food, but because the food is not accessible to them and there is too much competition in the market.
Crop production in China increased dramatically from 1961 to 2011. Along with the increase in crop production, temperature has also increased to a level that can affect the entire globe. Regions currently inadequate for farming might be able to grow crops in the future if temperatures keep rising. Farming conditions have also been shaped by policy: until the mid-1990s, China's agricultural system required farmers to pay taxes. Because China is so populous, food security is a major concern: consumers spend a large portion of their income on food, rural households earn about half their income from farming, and if China started importing more food, it would affect the entire world, especially poorer countries.
Climate change has an impact on everything, especially temperature, crop production, CO2 emissions, and precipitation levels. There is a common misconception about CO2: many assume that CO2 emissions are harmful to crop production, but elevated CO2 can actually act as an enrichment for crops. China also suffers from severe droughts every year, and droughts have had a huge impact on crop production.

Methodology:
Precipitation, temperature, technology-enhancement, and CO2 emission data for China were acquired from the FAO database for 1961-2012. The data start in 1961 because the FAO began recording yield data that year. Climate change affects temperature, precipitation, CO2 emissions, and technology, and this in turn affects food security. To see the relationships among all the variables, we used MATLAB and computed correlation coefficients. ArcMap was used to determine average NDVI data from the NASA website. NDVI stands for Normalized Difference Vegetation Index: "The Normalized Difference Vegetation Index (NDVI) is a numerical indicator that uses the visible and near-infrared bands of the electromagnetic spectrum, and is adopted to analyze remote sensing measurements and assess whether the target being observed contains live green vegetation or not." (3) We relied on remote sensing for our data.
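To make the correlation step concrete, here is a minimal MATLAB sketch, assuming the yearly series for 1961-2012 are already loaded as column vectors (the variable names, and the base-10 log, are assumptions, not the project's actual script):

% Hypothetical yearly column vectors: precip, temp, co2, tech, yield
X = [precip temp co2 tech log10(yield)];
R = corrcoef(X);       % matrix of pairwise correlation coefficients
R(1:4, 5)              % correlation of each variable with log(yield)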


[Image: method.png]


Generated Formula:


Log(yield)= a + b1*Precipitation + b2*Temperature + b3*CO2 + b4*Technology
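The coefficients a and b1-b4 can then be estimated by ordinary least squares. A minimal sketch in MATLAB (the project's actual fitting routine is not shown in this post; the year-index technology vector and the base-10 log are assumptions):

% Assumes yearly column vectors precip, temp, co2, yield for 1961-2012
tech = (1:52)';                          % technology proxy: simple year index
X = [ones(52,1) precip temp co2 tech];   % design matrix with intercept column
b = regress(log10(yield), X);            % b(1) = a, b(2:5) = b1..b4 (Statistics Toolbox)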


Data:


Precipitation data were gathered from the World Bank (4). Average CO2 emission data were collected from the NOAA website (8). ENSO data were obtained from the World Bank (4). Crop yields for all the main crops were gathered from the FAO website (7). To capture changes in technology, we made a vector of the years 1961-2012. Average NDVI data were obtained from the NASA website (9).


[Figure: 1st.png, average CO2 emission]
1990: 255
1995: 360
2000: 370
2005: 380
2010: 390
2015: 400
2020: 410
2025: 420
2030: 430


[Figure: 2nd.png, precipitation]
1990:
1995:
2000:
2005:
2010:
2015:
2020:
2025:
2030: 49.5
[Figure: 3rd.png, temperature]
1990: 7
1995: 6.8
2000: 6.8
2005: 7
2010: 7
2015:
2020:
2025:
2030: 8
[Figure: 4th.png, crop yield by crop]
Maize:
1990: 46,000
1995: 50,000
2000: 48,000
2005: 56,000
2010: 54,000
2015: 56,000
2020: 55,000
2025: 58,000
2030: 65,000

Soybeans:
1990: 14,000
1995: 18,000
2000: 17,000
2005: 17,000
2010: 19,000
2015: 20,000
2020: 19,000
2025: 20,000
2030: 21,000

Rice:
1990: 58,000
1995: 61,000
2000: 61,000
2005: 61,000
2010: 66,000
2015: 72,000
2020: 74,000
2025: 76,000
2030: 80,000

Sorghum:
1990: 38,000
1995: 30,000
2000: 30,000
2005: 45,000
2010: 40,000
2015: 46,000
2020: 43,000
2025: 50,000
2030: 52,000

Barley:
1990: 29,000
1995: 27,000
2000: 25,000
2005: 40,000
2010: 39,000
2015: 40,000
2020: 41,000
2025: 41,000
2030: 46,000

Wheat:
2030: 70,000

Results:
The graph below illustrates that a drought has an immediate effect on crop production in China. The 0-month lag had the strongest relationship compared to the 1-, 2-, and 3-month lags: the correlation between NDVI and PDSI was 0.688 at a 0-month lag, 0.525 at a 1-month lag, 0.219 at a 2-month lag, and -0.120 at a 3-month lag. Looking at Figures 1-4 above (precipitation, CO2 emission, temperature, and crop yield), all of the variables fluctuate but mostly increase from 1961 to 2012. The coefficients of the crop yield equation are: b1 = -0.004764, b2 = 0.006878, b3 = 0.002833, and b4 = 0.001118 (full precision retained in the equations below).
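For reference, the lagged correlations between the monthly NDVI and PDSI series can be computed as in this sketch, assuming ndvi and pdsi are monthly column vectors of equal length (variable names hypothetical):

for lag = 0:3
    r = corrcoef(ndvi(1+lag:end), pdsi(1:end-lag));   % shift NDVI forward by the lag
    fprintf('%d-month lag: r = %.3f\n', lag, r(1,2));
end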

Log(yield)= a + b1*Precipitation + b2*Temperature + b3*CO2 + b4*Technology
2030 log(yield) = (-0.004764442104632809)*(49.5) + (0.006878378588643539)*(8) + (0.002832575142712707)*(430) + (0.001118322107259302)*() + a


2016 log(yield) = (-0.004764442104632809)*(50) + (0.006878378588643539)*(7) + (0.002832575142712707)*(400) + (0.001118322107259302)*() + a
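Once the intercept a and a technology index for the target year are available from the fitted model, the prediction is a direct plug-in. A sketch only: a and tech2030 below are placeholders, since neither value is given in this post, and the base-10 log is an assumption.

a = 0;          % placeholder: the fitted intercept is not published in this post
tech2030 = 0;   % placeholder: 2030 technology index from the fitted model
b = [-0.004764442104632809; 0.006878378588643539; 0.002832575142712707; 0.001118322107259302];
x2030 = [49.5; 8; 430; tech2030];   % precipitation, temperature, CO2, technology
logyield2030 = a + b.' * x2030;     % log(yield) for 2030
yield2030 = 10^logyield2030;        % assuming base-10 log, as in the fit sketch above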

[Image: final11.png]


Procedure to get the NDVI averages:


[Image: map.png]


Conclusion:


To conclude, based on the relationship between NDVI and PDSI, crops are clearly healthier when there are fewer droughts: when a drought occurs, it has an immediate effect on China's crops, and when the NDVI value is high, the PDSI value is also high. Beyond that relationship, through MATLAB we also found a strong relationship between precipitation, CO2 emission, temperature, technology, and crop production, and we developed an equation to predict future crop yield from temperature, precipitation, CO2 emission, and technology.
In order to feed its population, China should be producing higher crop yields. This research could be extended in the future by comparing more than the six major crops, and by using more years of NDVI averages to examine the vegetation health of China's crops.

Wednesday, February 1, 2017

Progress 01.09.17



First, I started off by importing images of the Times Square subway sign (with almost all the trains) and the 59th Street station sign into MATLAB.


[Image: Times Square subway sign]

This plate displays numerous colors, but to make it easier for the program to capture them, I have to pick out the colors individually.

First, I started off by changing the color of the whole image. Using the following MATLAB code, I turned the whole image gray.

>> rgb = imread('NQR345.png');      % load the sign image
>> figure
>> imshow(rgb)                      % show the original
>> gray_image = rgb2gray(rgb);      % convert to grayscale ('rbg' was a typo)
>> imshow(gray_image);


Once I was comfortable changing the colors around, I tried converting the image to black and white. Since there are so many other letters and numbers on the plate itself, we need to detect the area to focus on. Using MATLAB, I was able to make the image black and white and capture NQR456.

The code:
>> center6 = center(1:6,:);      % keep the six strongest circle centers
>> radii6  = radii(1:6);         % and their radii ('radius' did not match the call below)
>> metric6 = metric(1:6);        % and their detection strengths
>> viscircles(center6, radii6, 'EdgeColor', 'r');   % outline the circles in red

**The 6 in the variable names above is arbitrary; they could be named anything else, for instance, train.
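For context, the center, radii, and metric variables above presumably come from imfindcircles, which returns detections sorted strongest first. A minimal sketch, with an assumed radius range:

[center, radii, metric] = imfindcircles(gray_image, [20 60]);  % radius range in pixels is a guess
center6 = center(1:6,:);
radii6  = radii(1:6);
viscircles(center6, radii6, 'EdgeColor', 'r');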

So far I have been successful at taking the colors away; now it is time to detect colors, any colors I want.
MATLAB has an app called the Color Thresholder, designed to isolate or separate colors. I played around with it using the Times Square subway image.
The polygon on the right side of the app lets you separate or keep colors. In the image above, I took away all the colors except for red and orange. It even took away the white letters and numbers, though we can still see the numbers 1, 2, and 3 and the letters N, Q, R, and W because they were initially white and the background is all black.



This is another example of color thresholding. In this image I drew the polygon so that we can see all the trains in one look, although it's not completely clear because we can still see some of the white letters.


To take this to the next level, I used MATLAB to take all the colors away and remove all the numbers and letters except for 1, 2, and 3. The code to get this result is the following.

%image = imread('/Users/admin/Pictures/subwaysignall.jpg');
image = imread('subwaysign.jpg');   % note: this shadows MATLAB's built-in image() function
J = imresize(image, 0.5);           % resize to half scale (was imresize(I, 0.5); I was undefined)
figure
imshow(image)
title('Original Image')
figure
imshow(J)
title('Resized Image')

%123
%image = imread('subwaysignall.jpg');
figure(1), imshow(image), title('Original');
%image = im2double(image);
%[r c p] = size(image);


%imageR = squeeze(image(:,:,1));
%imageG = squeeze(image(:,:,2));
%imageB = squeeze(image(:,:,3));

%imageBWR = im2bw(imageR, graythresh(imageR));
%imageBWG = im2bw(imageG, graythresh(imageG));
%imageBWB = im2bw(imageB, graythresh(imageB));
%imageBW = imcomplement(imageBWR&imageBWG&imageBWB);
%figure(2), imshow(imageBWR); title('Color Thresholded');
%figure(3), imshow(imageBWG); title('Color Thresholded');
%figure(4), imshow(imageBWB); title('Color Thresholded');
%figure(5), imshow(imageBW); title('Color Thresholded');

red = @createMask;
imageBWRed = red(image);
figure(6), imshow(imageBWRed); title('Red Train');

imageBWRed = bwmorph(imageBWRed,'clean');   % remove isolated foreground pixels

% ocrAnn = @insertOCRAnnotation;
ocrEval = @evaluateOCRTraining;
[ocrimageBWRed, results] = ocrEval(imageBWRed);
figure(7), imshow(ocrimageBWRed); title('OCR Red Train');

text = results.Text;               % recognized characters (shadows the built-in text() function)
% Build the macOS 'say' command; note the spaces around -v and the voice
% name (the original concatenation omitted them, producing an invalid command).
words = ['say -v Victoria subway train ' text(1) ' ' text(2) ' ' text(3)];
system(words);                     % speak the recognized train characters
results.CharacterConfidences;      % per-character OCR confidence

function [BW,maskedRGBImage] = createMask(RGB)
%createMask  Threshold RGB image using auto-generated code from colorThresholder app.
%  [BW,MASKEDRGBIMAGE] = createMask(RGB) thresholds image RGB using
%  auto-generated code from the colorThresholder App. The colorspace and
%  minimum/maximum values for each channel of the colorspace were set in the
%  App and result in a binary mask BW and a composite image maskedRGBImage,
%  which shows the original RGB image values under the mask BW.

% Auto-generated by colorThresholder app on 03-Dec-2016
%------------------------------------------------------

% Convert RGB image to chosen color space
I = rgb2hsv(RGB);

% Define thresholds for channel 1 based on histogram settings
channel1Min = 0.917;
channel1Max = 0.045;

% Define thresholds for channel 2 based on histogram settings
channel2Min = 0.000;
channel2Max = 1.000;

% Define thresholds for channel 3 based on histogram settings
channel3Min = 0.000;
channel3Max = 1.000;

% Create mask based on chosen histogram thresholds
sliderBW = ( (I(:,:,1) >= channel1Min) | (I(:,:,1) <= channel1Max) ) & ...
    (I(:,:,2) >= channel2Min ) & (I(:,:,2) <= channel2Max) & ...
    (I(:,:,3) >= channel3Min ) & (I(:,:,3) <= channel3Max);

% Create mask based on selected regions of interest on point cloud projection
I = double(I);
[m,n,~] = size(I);
polyBW = false([m,n]);
I = reshape(I,[m*n 3]);

% Convert HSV color space to canonical coordinates
Xcoord = I(:,2).*I(:,3).*cos(2*pi*I(:,1));
Ycoord = I(:,2).*I(:,3).*sin(2*pi*I(:,1));
I(:,1) = Xcoord;
I(:,2) = Ycoord;
clear Xcoord Ycoord

% Project 3D data into 2D projected view from current camera view point within app
J = rotateColorSpace(I);

% Apply polygons drawn on point cloud in app
polyBW = applyPolygons(J,polyBW);

% Combine both masks
BW = sliderBW & polyBW;

% Initialize output masked image based on input image.
maskedRGBImage = RGB;

% Set background pixels where BW is false to zero.
maskedRGBImage(repmat(~BW,[1 1 3])) = 0;

end

function J = rotateColorSpace(I)

% Translate the data to the mean of the current image within app
shiftVec = [0.035456 0.001129 0.241630];
I = I - shiftVec;
I = [I ones(size(I,1),1)]';

% Apply transformation matrix
tMat = [-0.488370 -0.361528 0.000000 0.683321;
    0.012842 -0.028299 0.658630 -0.491234;
    0.282769 -0.623110 -0.029912 8.146246;
    0.000000 0.000000 0.000000 1.000000];

J = (tMat*I)';
end

function polyBW = applyPolygons(J,polyBW)

% Define each manually generated ROI
hPoints(1).data = [0.191929 0.018162;
    0.548562 -0.007794;
    0.502281 -0.306294;
    0.257266 -0.341984];

% Iteratively apply each ROI
for ii = 1:length(hPoints)
    if size(hPoints(ii).data,1) > 2
        in = inpolygon(J(:,1),J(:,2),hPoints(ii).data(:,1),hPoints(ii).data(:,2));
        in = reshape(in,size(polyBW));
        polyBW = polyBW | in;
    end
end

end

function [ocrI, results] = evaluateOCRTraining(I, roi)

% Location of trained OCR language data
trainedLanguage = '/Users/admin/Documents/MATLAB/myLang/tessdata/myLang.traineddata';

% Run OCR using trained language. You may need to modify OCR parameters or
% pre-process your test images for optimal results. Also, consider
% specifying an ROI input to OCR in case your images have a lot of non-text
% background.
layout = 'Block';
if nargin == 2
    results = ocr(I, roi, ...
        'Language', trainedLanguage, ...
        'TextLayout', layout);
else
    results = ocr(I, ...
        'Language', trainedLanguage, ...
        'TextLayout', layout);
end
ocrI = insertOCRAnnotation(I, results);
end
%--------------------------------------------------------------------------
% Annotate I with OCR results.
%--------------------------------------------------------------------------
function J = insertOCRAnnotation(I, results)
text = results.Text;

I = im2uint8(I);
if isempty(deblank(text))
    % Text not recognized.
    text = 'Unable to recognize any text.';
    [M,N,~] = size(I);
    J = insertText(I, [N/2 M/2], text, ...
        'AnchorPoint', 'Center', 'FontSize', 24, 'Font', 'Arial Unicode');
    
else
    location = results.CharacterBoundingBoxes;
    
    % Remove new lines from results.
    newlines = text == char(10);
    text(newlines) = [];
    location(newlines, :) = [];
    
    % Remove spaces from results
    spaces = isspace(text);
    text(spaces) = [];
    location(spaces, :) = [];
    
    % Convert text array into cell array of strings.
    text = num2cell(text);
    
    % Pad the image to help annotate results close to the image border.
    I = padarray(I, [50 50], uint8(255));
    location(:,1:2) = location(:,1:2) + 50;
    
    % Insert text annotations.
    J  = insertObjectAnnotation(I, 'rectangle', location, text);
end
end

As we can see in the image, all the colors have vanished and we are left with only what we want: 1, 2, and 3 in black and white. Using the Color Thresholder app, I added the generated function to the code. Using the same method, I used the app's function for the blue line (A, C, E), and the image gave me A, C, E in black and white.



Code for the image above: the listing was identical to the script and helper functions shown earlier; only the createMask function, regenerated from the Color Thresholder app with thresholds for the blue (A, C, E) line, would differ.




This is just the red line. Using different mask functions, the same tactic can be applied to every line in the New York City subway; I did the same thing with A, C, E, 4, 5, 6, N, Q, R, W, 7, and S.
The next step is to make this useful for a visually impaired individual. For that, the image has to actually speak to us, which is what the system('say ...') call in the code above does: it says "subway train 1, 2, 3, N, R, Q" out loud.


MATLAB was not able to give the results I had expected. The problem was that the computers did not have enough memory; most of the laptops had only 4 GB of RAM.
Dell:
04: 4 GB
09: 4 GB
01: 4 GB

Mac:
A12: 4 GB
07: 4 GB
B2: 4 GB
A10: 4 GB
B1: 4 GB
B9: 4 GB

The individual functions are tedious and inconvenient, so it is best to combine them so that one function does the same job as five; see the sketch below. The same goes for the video version: combining all the different functions into one saves a lot of time.
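As a sketch of what that combined helper could look like (the function name, arguments, and voice here are my own, not the project's; evaluateOCRTraining is the helper defined in the listing above):

function speakTrains(imgFile, maskFcn, lineName)
% One call wraps the mask, cleanup, OCR, and speech steps.
img = imread(imgFile);
bw  = maskFcn(img);                  % e.g. @createMask generated per line color
bw  = bwmorph(bw, 'clean');          % drop isolated pixels
[~, results] = evaluateOCRTraining(bw);
trains = strtrim(results.Text);      % recognized train characters
system(['say -v Victoria ' lineName ' train ' trains]);
end

For example, speakTrains('subwaysign.jpg', @createMask, 'red') would run the whole red-line pipeline in one step.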

The most convenient way to make all the trains, the whole board really, black and white is the Color Thresholder app: you can change the colors any way you want, and if you select the color range properly you get a clean image.

The remaining problem is that I have to get rid of the little unnecessary dots. The little white specks can be removed with erosion and dilation in MATLAB, as sketched below.
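A minimal sketch of that cleanup, assuming the thresholded result is a binary mask named bw (the structuring-element size is a guess):

se = strel('disk', 2);      % small disk-shaped structuring element, size chosen by eye
bw = imerode(bw, se);       % erosion removes specks smaller than the disk
bw = imdilate(bw, se);      % dilation restores the surviving shapes
% Equivalently: bw = imopen(bw, se); or bwareaopen(bw, 30) to drop
% connected components smaller than about 30 pixels.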
Next, I want to make this work on video of the whole board.
I used the toolbox to clean up the image of all the trains.

Code for the image above:
image = imread('subwaysign.jpg');
figure(1), imshow(image), title('Original');


red = @createMask;
imageBWRed = red(image);
figure(6), imshow(imageBWRed); title('Red Train');

imageBWRed = bwmorph(imageBWRed,'clean');   % remove isolated foreground pixels

% ocrAnn = @insertOCRAnnotation;
ocrEval = @evaluateOCRTraining;   % evaluateOCRTraining/insertOCRAnnotation as defined in the earlier listing
[ocrimageBWRed, results] = ocrEval(imageBWRed);
figure(7), imshow(ocrimageBWRed); title('OCR Red Train');

text = results.Text;               % recognized characters
% Build the macOS 'say' command; note the spaces around -v and the voice
% name (the original concatenation omitted them, producing an invalid command).
words = ['say -v Victoria subway train ' text(1) ' ' text(2) ' ' text(3)];
system(words);
results.CharacterConfidences;      % per-character OCR confidence




function [BW,maskedRGBImage] = createMask(RGB)
%createMask  Threshold RGB image using auto-generated code from colorThresholder app.
%  [BW,MASKEDRGBIMAGE] = createMask(RGB) thresholds image RGB using
%  auto-generated code from the colorThresholder App. The colorspace and
%  minimum/maximum values for each channel of the colorspace were set in the
%  App and result in a binary mask BW and a composite image maskedRGBImage,
%  which shows the original RGB image values under the mask BW.

% Auto-generated by colorThresholder app on 22-Mar-2017
%------------------------------------------------------


% Convert RGB image to chosen color space
I = RGB;

% Define thresholds for channel 1 based on histogram settings
channel1Min = 0.000;
channel1Max = 255.000;

% Define thresholds for channel 2 based on histogram settings
channel2Min = 0.000;
channel2Max = 255.000;

% Define thresholds for channel 3 based on histogram settings
channel3Min = 0.000;
channel3Max = 255.000;

% Create mask based on chosen histogram thresholds
sliderBW = (I(:,:,1) >= channel1Min ) & (I(:,:,1) <= channel1Max) & ...
    (I(:,:,2) >= channel2Min ) & (I(:,:,2) <= channel2Max) & ...
    (I(:,:,3) >= channel3Min ) & (I(:,:,3) <= channel3Max);

% Create mask based on selected regions of interest on point cloud projection
I = double(I);
[m,n,~] = size(I);
polyBW = false([m,n]);
I = reshape(I,[m*n 3]);

% Project 3D data into 2D projected view from current camera view point within app
J = rotateColorSpace(I);

% Apply polygons drawn on point cloud in app
polyBW = applyPolygons(J,polyBW);

% Combine both masks
BW = sliderBW & polyBW;

% Initialize output masked image based on input image.
maskedRGBImage = RGB;

% Set background pixels where BW is false to zero.
maskedRGBImage(repmat(~BW,[1 1 3])) = 0;

end

function J = rotateColorSpace(I)

% Translate the data to the mean of the current image within app
shiftVec = [56.132587 47.697461 47.632722];
I = I - shiftVec;
I = [I ones(size(I,1),1)]';

% Apply transformation matrix
tMat = [-0.002278 0.001021 0.000000 0.241278;
    0.000002 0.000004 0.002411 -0.501110;
    -0.001065 -0.002185 0.000004 9.324087;
    0.000000 0.000000 0.000000 1.000000];

J = (tMat*I)';
end

function polyBW = applyPolygons(J,polyBW)

% Define each manually generated ROI
hPoints(1).data = [-0.194331 -0.578767;
    -0.178443 -0.637376;
    -0.038094 -0.622724;
    -0.040742 -0.540671;
    0.149920 -0.414661;
    0.306156 -0.356052;
    0.385599 -0.335538;
    0.380303 -0.171432;
    0.300860 -0.171432;
    0.290268 -0.344330;
    0.250547 -0.285720;
    0.213474 -0.294512;
    0.229362 -0.373634;
    0.165808 -0.394148;
    0.123439 -0.317955;
    0.075773 -0.332608;
    0.107550 -0.414661];

% Iteratively apply each ROI
for ii = 1:length(hPoints)
    if size(hPoints(ii).data,1) > 2
        in = inpolygon(J(:,1),J(:,2),hPoints(ii).data(:,1),hPoints(ii).data(:,2));
        in = reshape(in,size(polyBW));
        polyBW = polyBW | in;
    end
end

end

Monday, November 14, 2016

Progress (11/14/16)

Today I was able to learn MATLAB syntax and play around with some of the tools in MATLAB; it is great for image recognition. My first task is to import an image of a train number that has some letters on it. My goal is to write MATLAB code so that, at the end, the code recognizes only the number of the train. I also played around with Xcode, even though I wasn't successful with it.
I tried importing some images of the train numbers into MATLAB; as a result, I got a lot of numbers in columns and rows. I have to study those numbers further in order to move forward. Next time, I have to try images of the train number together with other letters, because when the app is ready and reads a train sign, the signs show both the number of the train and its direction, whether it's going uptown or downtown.
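Those rows and columns are just pixel values: for a color image, imread returns an M-by-N-by-3 array of uint8 intensities. For example (the file name here is hypothetical):

img = imread('train1.png');   % hypothetical file name
size(img)                     % rows x columns x 3 color channels (RGB)
img(1,1,:)                    % red, green, blue values of the top-left pixel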

Gantt Chart