A teacher at my school mentioned a project related to autonomous driving and recommended a competition, so I ended up entering an autonomous-driving contest.
The competition details were as follows.
About the SW future-talent program
Global SW·AI Education Program overview
■ Purpose of the program
Professors currently teaching at US universities are invited to train Korean high school students with the same curriculum used for prospective US college students, with the goal of building SW skills measured against a global standard.
Field | Program overview | Lead instructor
Data Science | A data-centered introductory computer science (CS) course taught to non-majors and prospective majors at US universities, restructured for Korean high school students | Prof. Park (Claremont McKenna College)
AI Autonomous Driving | A course that uses the Raspberry Pi to teach computer architecture and physical computing, then adds Python-based AI so that students can assemble and control an autonomous car | Prof. Lee (Northeastern University)
■ How the training runs
- Regular online classes (Aug-Oct)
- Offline hands-on training and regional preliminaries (Oct-Nov)
- Challenge finals and awards (planned for Nov 26 (Thu)-28 (Sat))
■ Awards
- Minister of Science and ICT Award: 2 per track (4 in total)
- IITP President's Award: 2 per track (4 in total)
Of the two tracks, Data Science and AI Autonomous Driving, we chose AI Autonomous Driving, and I entered the competition together with a junior from my club.
The process in detail was as follows.
Course overview
▶ Course content
A course built around the Raspberry Pi to teach computer architecture and physical computing, combined with Python-based AI, so that students can assemble and control an autonomous car.
▶ Program structure
(1) Goals
- Hard-skill and soft-skill training run in parallel, to raise students' interest in AI and help them set a career direction
(2) Online classes (Saturdays, 4 hours x 4 sessions)
- CS basics, Raspberry Pi, Python programming, plus soft skills, autonomous-car assembly, and more
(3) Offline training (~8 hours; mentors visit each team after scheduling)
- Capstone design, advice from TA mentors, etc., to prepare the final presentation
(4) Regional preliminaries
- Comprehensive evaluation by the supervising professor (participation, comprehension, initiative, presentation, etc.) plus mission performance (during Oct-Nov)
- The team that passes the regional preliminary advances to the challenge (Nov 24-26)
▶ Instructors: Prof. Lee (Northeastern University CS), with teaching assistants
▶ Language: mainly English, with Korean
▶ Main curriculum
Understanding computers (live + online): Raspberry Pi
Autonomous car assembly and practice (in person): Pi car kit + machine learning
AI project (online): soft skills + upgrades and the final presentation
▶ Materials
One autonomous car kit provided per team
Online class details
▶ Class time: 10 a.m. to 2 p.m. (one session = 4 periods)
- Attendance is checked 10 minutes before class starts
- If you cannot join the live online session, contact the operations office no later than the day before.
▶ Detailed schedule
- (Python basics) Aug 27 (Sat) - optional, for those who want it (recording provided)
In the end, the hurdles in front of us were the following:
1. Take the online lectures and learn the material
2. Submit a video log of our own work, to be evaluated
3. Teams that pass step 2 move on to the offline competition venue for the final judging
Step 1 was the program's own coursework, so I will skip it and focus on what we did for step 2.
The GitHub repository with the files we uploaded along the way:
https://github.com/MOSW626/hey-dobby-driver.git
Now I want to walk through the programming we did ourselves.
1. The original program
: The code we started from recognizes the floor (the lane) and decides whether or not a stop sign is in view.
# USAGE
# python stop_detector.py

# import the necessary packages
from keras.preprocessing.image import img_to_array
from keras.models import load_model
import tensorflow as tf
from imutils.video import VideoStream
import numpy as np
import imutils
import time
import cv2
import os

# define the path to the stop / not-stop deep learning model
MODEL_PATH = "./models/stop_not_stop.model"
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)

# initialize the total number of frames that *consecutively* contain
# a stop sign, along with the threshold required to trigger the alarm
TOTAL_CONSEC = 0
TOTAL_THRESH = 20

# initialize whether the stop-sign alarm has been triggered
STOP = False

# load the model
print("[INFO] loading model...")
model = load_model(MODEL_PATH)

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
# vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
vs = cv2.VideoCapture(-1)
vs.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
vs.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
time.sleep(2.0)

# loop over the frames from the video stream
while True:
    # grab the frame from the video stream and resize it
    # to have a maximum width of 320 pixels
    ret, frame = vs.read()
    frame = imutils.resize(frame, width=320)

    # prepare the image to be classified by our deep learning network
    image = frame[60:120, 240:320]
    image = cv2.resize(image, (28, 28))
    image = image.astype("float") / 255.0
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)

    # classify the input image and initialize the label and
    # probability of the prediction
    (notStop, stop) = model.predict(image)[0]
    label = "Not Stop"
    proba = notStop

    # check to see if a stop sign was detected using our convolutional
    # neural network
    if stop > notStop:
        # update the label and prediction probability
        label = "Stop"
        proba = stop

        # increment the total number of consecutive frames that
        # contain a stop sign
        TOTAL_CONSEC += 1

        # check to see if we should raise the stop sign alarm
        if not STOP and TOTAL_CONSEC >= TOTAL_THRESH:
            # indicate that a stop sign has been found
            STOP = True
            print("Stop Sign...")

    # otherwise, reset the consecutive frame count and the alarm
    else:
        TOTAL_CONSEC = 0
        STOP = False

    # build the label and draw it on the frame
    label = "{}: {:.2f}%".format(label, proba * 100)
    frame = cv2.putText(frame, label, (10, 25),
        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    frame = cv2.rectangle(frame, (240, 60), (320, 120), (0, 0, 255), 2)

    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
print("[INFO] cleaning up...")
cv2.destroyAllWindows()
vs.release()
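One design point worth noting: the classifier never scans the whole frame. It only looks at a fixed 80x60 region of interest in the upper right (frame[60:120, 240:320], the red rectangle drawn on screen), which is roughly where a sign placed beside the track appears from the car's camera, and the TOTAL_THRESH of 20 consecutive positive frames debounces one-off misclassifications before the stop alarm latches.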
We then set about modifying code written in this style.
2. Carrying out the project
We held a meeting on how to proceed, and since the stop-sign recognition had already worked, we set ourselves the goal of recognizing other traffic signs as well and driving the car through a course we would build ourselves.
To do this, the single label/proba pair from above was replaced with several labels, and to train the model the same way we modified the test_network and train_network files.
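Our modified test_network file is not reproduced in this post, but a minimal sketch of the idea, assuming the same 28x28 preprocessing and the integer label order defined in train_network.py (the path examples/stop_01.png is just a placeholder, not an actual file from our repository), might look like this:

# minimal single-image sanity check for the 7-class model (a sketch, not our exact file)
from keras.preprocessing.image import img_to_array
from keras.models import load_model
import numpy as np
import cv2

# class names, ordered to match the integer labels assigned in train_network.py
LABELS = ("road", "stop", "turnleft", "speed_30", "speed_60", "parking", "uturn")

model = load_model("traffic_sign.model")
image = cv2.imread("examples/stop_01.png")  # placeholder test-image path
image = cv2.resize(image, (28, 28)).astype("float") / 255.0
image = np.expand_dims(img_to_array(image), axis=0)

# report the most probable sign class
preds = model.predict(image)[0]
idx = int(np.argmax(preds))
print("{}: {:.2f}%".format(LABELS[idx], preds[idx] * 100))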
The code below is the modified sign_detector file.
# USAGE
# python sign_detector.py

# import the necessary packages
from keras.preprocessing.image import img_to_array
from keras.models import load_model
from imutils.video import VideoStream
import numpy as np
import imutils
import time
import cv2
import os

# define the path to the trained traffic-sign deep learning model
MODEL_PATH = "traffic_sign.model"
#MODEL_PATH = "aa.tflite"
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

# initialize the total number of frames that *consecutively* contain
# a stop sign, along with the threshold required to trigger the alarm
TOTAL_CONSEC = 0
TOTAL_THRESH = 20

# initialize whether the stop-sign alarm has been triggered
STOP = False

# class names, ordered to match the integer labels assigned in train_network.py
LABELS = ("Road", "Stop", "Turn Left", "Speed 30", "Speed 60", "Parking", "U-Turn")

# load the model
print("[INFO] loading model...")
model = load_model(MODEL_PATH)

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)

# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 320 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=320)

    # prepare the image to be classified by our deep learning network
    image = frame[60:120, 240:320]
    image = cv2.resize(image, (28, 28))
    image = image.astype("float") / 255.0
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)

    # classify the input image: the model now outputs one probability
    # per sign class, so take the most likely class
    preds = model.predict(image)[0]
    idx = int(np.argmax(preds))
    label = LABELS[idx]
    proba = preds[idx]

    # check to see if a stop sign was detected
    if label == "Stop":
        # increment the total number of consecutive frames that
        # contain a stop sign
        TOTAL_CONSEC += 1

        # check to see if we should raise the stop sign alarm
        if not STOP and TOTAL_CONSEC >= TOTAL_THRESH:
            # indicate that a stop sign has been found
            STOP = True
            print("Stop Sign...")

    # otherwise, reset the consecutive frame count and the alarm
    else:
        TOTAL_CONSEC = 0
        STOP = False

    # build the label and draw it on the frame
    label = "{}: {:.2f}%".format(label, proba * 100)
    frame = cv2.putText(frame, label, (10, 25),
        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    frame = cv2.rectangle(frame, (240, 60), (320, 120), (0, 0, 255), 2)

    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
print("[INFO] cleaning up...")
cv2.destroyAllWindows()
vs.stop()
In addition, for the training step we wrote the labeling code below.
# USAGE
# python train_network.py --dataset images --model traffic_sign.model

# set the matplotlib backend so figures can be saved in the background
import matplotlib
#matplotlib.use("Agg")

# import the necessary packages
from keras.preprocessing.image import ImageDataGenerator, img_to_array
from keras.optimizers import Adam
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from lenet import LeNet
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import random
import cv2
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
    help="path to input dataset")
ap.add_argument("-m", "--model", required=True,
    help="path to output model")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
    help="path to output loss/accuracy plot")
args = vars(ap.parse_args())

# initialize the number of epochs to train for, initial learning rate,
# and batch size
EPOCHS = 25  # number of training epochs
INIT_LR = 1e-3  # initial learning rate
BS = 32  # batch size (samples per update)

# initialize the data and labels
print("[INFO] loading images...")
data = []
labels = []

# grab the image paths and randomly shuffle them
imagePaths = sorted(list(paths.list_images(args["dataset"])))
random.seed(42)
random.shuffle(imagePaths)

# loop over the input images
for imagePath in imagePaths:
    # load the image, pre-process it, and store it in the data list
    image = cv2.imread(imagePath)
    image = cv2.resize(image, (28, 28))
    image = img_to_array(image)
    data.append(image)

    # extract the class label from the parent directory name and
    # map it to an integer
    label = imagePath.split(os.path.sep)[-2]
    if label == "uturn":
        label = 6
    elif label == "parking":
        label = 5
    elif label == "speed_60":
        label = 4
    elif label == "speed_30":
        label = 3
    elif label == "turnleft":
        label = 2
    elif label == "stop":
        label = 1
    else:
        label = 0
    labels.append(label)

# scale the raw pixel intensities to the range [0, 1]
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)

# partition the data into training and testing splits using 75% of
# the data for training and the remaining 25% for testing
(trainX, testX, trainY, testY) = train_test_split(data,
    labels, test_size=0.25, random_state=42)

# convert the labels from integers to one-hot vectors
# (increase num_classes when adding new sign classes)
trainY = to_categorical(trainY, num_classes=7)
testY = to_categorical(testY, num_classes=7)

# construct the image generator for data augmentation; no horizontal
# flips, since mirrored signs (e.g. turnleft) would get the wrong label
aug = ImageDataGenerator(rotation_range=30, width_shift_range=0.1,
    height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
    horizontal_flip=False, fill_mode="nearest")

# initialize the model (depth and classes must match the data)
print("[INFO] compiling model...")
model = LeNet.build(width=28, height=28, depth=3, classes=7)
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# train the network
print("[INFO] training network...")
H = model.fit_generator(aug.flow(trainX, trainY, batch_size=BS),
    validation_data=(testX, testY), steps_per_epoch=len(trainX) // BS,
    epochs=EPOCHS, verbose=1)

# save the model to disk
print("[INFO] serializing network...")
model.save(args["model"])

# plot the training loss and accuracy
plt.style.use("ggplot")
plt.figure()
N = EPOCHS
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy on Traffic sign")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
#plt.savefig(args["plot"])
plt.show()
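To retrain after collecting new images, the script is run as in the USAGE comment, e.g. python train_network.py --dataset images --model traffic_sign.model. Since the class label is read from the parent directory name (imagePath.split(os.path.sep)[-2]), the dataset folder needs one subdirectory per sign class, e.g. images/stop, images/turnleft, images/speed_30, and so on; any image whose folder matches none of the listed names falls into the road class.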
Through this process we programmed the car, then built a course and let it drive autonomously.
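The driving code itself depends on the kit's motor library, so it is not shown here. Purely as an illustration of how the detector's STOP flag gated the motors, a rough sketch with RPi.GPIO might look like the following; the pin numbers and the forward/halt helpers are hypothetical placeholders, not our kit's actual wiring or API.

# hypothetical sketch: gate the drive motor on the detector's STOP flag
import time
import RPi.GPIO as GPIO

ENA, IN1, IN2 = 18, 23, 24  # placeholder motor-driver pins, not the real wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup([ENA, IN1, IN2], GPIO.OUT)
pwm = GPIO.PWM(ENA, 100)  # 100 Hz PWM on the enable pin
pwm.start(0)

def forward(speed=40):
    # spin the motor forward at the given duty cycle
    GPIO.output(IN1, GPIO.HIGH)
    GPIO.output(IN2, GPIO.LOW)
    pwm.ChangeDutyCycle(speed)

def halt():
    # cut the duty cycle so the car coasts to a stop
    pwm.ChangeDutyCycle(0)

# inside the detection loop, after STOP is updated each frame, the idea is:
# if STOP: halt(); time.sleep(3); STOP, TOTAL_CONSEC = False, 0
# else: forward()
try:
    forward()
    time.sleep(2.0)  # drive briefly, then stop (standalone demo)
    halt()
finally:
    pwm.stop()
    GPIO.cleanup()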
3. Video Log
The files we had to submit were uploaded at the link below.
With that done, the whole project was finished for the time being, and all that remained was to wait for the results.
4. Wrap-up
Unfortunately, we got a message saying we had been eliminated, and since both of us on the team had made other plans by then, we were about to leave on trips to Japan and Singapore.
Then, out of nowhere, a message arrived saying we had been accepted after all as an additional pick.
Regrettably our schedules meant we could not take part, but I still learned a great deal, and it was a genuinely meaningful experience.