Abstract

Repair shipyards sometimes need three-dimensional geometry for repairs and retrofits. However, they often create models manually from two-dimensional drawings provided by ship owners. In this case, human error leads to inaccuracies, making the process time-consuming and laborious. Therefore, research on efficient three-dimensional hullform reconstruction from two-dimensional drawings is needed. This study proposes a method to automatically extract points from two-dimensional lines and visualize them in three dimensions. The proposed method consists of three steps. The first step is point extraction through image processing, which uses a starting point search algorithm to access overlapping or intersecting lines and extracts the points on the lines in the drawing by searching for paths between the starting point and the end point entered by the user. The second step is the transformation of the extracted data, which converts the points in pixel coordinates into 3D points through coordinate transformation and scaling, utilizing the stored line data and three-dimensional coordinate information. The last step is to visualize the transformed data as an actual three-dimensional model through point visualization. This study demonstrates that the proposed method can be effectively utilized by detecting two-dimensional lines and reconstructing the hullform in three dimensions.

Highlights
  • This study presents 3D hullform reconstruction based on 2D drawings.

  • This study proposes image processing techniques to detect overlapping straight and curved lines.

  • The proposed method is composed of point extraction, transformation, and visualization.

  • The hullform is successfully reconstructed in 3D from a 2D lines drawing.

1. Introduction

In general, a repair shipyard is a place where damaged parts of a ship's engines, machinery, electrical systems, and hull are repaired or replaced, or where routine maintenance is performed. To repair a ship, a repair shipyard performs a process called a retrofit, which changes the weight of the ship, as shown in Fig. 1. If the part being retrofitted is heavy, a stability calculation may be required.

Figure 1: Repair processing.

To calculate stability, a hullform that corresponds to the ship's outline is needed. As shown in Fig. 2, shipyards exchange three-dimensional models seamlessly. However, suppliers and repair shipyards cannot be provided with three-dimensional models for security reasons and receive only partial drawings. As a result, repair shipyards are forced to manually reconstruct three-dimensional models from limited resources, a process that has required a great deal of time and effort to date.

Figure 2: Shipyards and repair yards deliver information and restore processes.

Manual hullform reconstruction is inefficient and needs to be automated. However, detecting lines in a typical drawing such as Fig. 3 presents several challenges. As shown in Fig. 3a, information such as text, leader lines, and dimension lines is written over the lines to be detected. The drawing contains various auxiliary lines and curves that are not part of the hullform, as shown in Fig. 3b. In addition, as shown in Fig. 3c, several lines may overlap or cross. Therefore, accurately distinguishing and detecting the lines is problematic.

Figure 3: Difficulties in detecting lines.

This study developed a line detection technology that overcomes the problems depicted in Fig. 3 by utilizing image processing techniques. The developed technology converts drawings into digital images and uses a combination of algorithms to detect the required lines and transform them into three-dimensional data. This improves the efficiency of three-dimensional modeling work in shipyards and provides a more accurate way to reconstruct and visualize the hullform.

2. Related studies

Recently, papers have been published on image processing to extract various kinds of information from two-dimensional drawings in various engineering fields. Aggarwal & Karl (2006) proposed a novel approach to the problem of detecting the location and orientation of straight lines in grayscale images. The study aims to improve the performance of traditional Hough-based straight line detection techniques by determining, through the inverse Radon operator, parameters that give the location and orientation of straight lines in an image. Experimental results demonstrate that the proposed method can effectively detect straight lines in noisy images. Fontanelli et al. (2011) developed a RANSAC (RANdom SAmple Consensus)-based road line detection algorithm. The technique takes into account the dynamic motion of a high-speed frame video camera and applies the RANSAC algorithm to fit a model of road lines in the presence of outliers. Experiments on more than 4500 images under various conditions showed high lane recognition rates in most cases. Luo et al. (2023) developed an improved U-Net network-based road line detection algorithm to solve traditional road line detection problems. The proposed method processes regions of interest using dynamic programming and extracts road line features through techniques such as group convolution, depthwise separable convolution, and atrous convolution. It applies importance weights to capture important features and regions of interest more effectively. Additionally, the loss function is optimized using a combination of focal loss and dice loss. Han et al. (2024) developed a rule-based method for classifying continuous lines in P&IDs (Piping and Instrumentation Diagrams). The method analyzes the shape and position relationships between P&ID objects to classify continuous lines into eight types. Experimental results demonstrate good precision and recall for classifying continuous lines in various P&IDs. Jeong et al. (2022) derived connections based on positional relationships to automatically recognize connections in P&IDs. Tesseract optical character recognition and UniverseNet (Universal-Scale Object Detection Network) were used to recognize text and objects, and the Hough transform was used to recognize lines. Moon et al. (2021) developed a technique to automatically convert P&IDs in image form to digital format. The technique uses a three-step recognition process that includes removing the outline and title box of the drawing, detecting continuous lines and flow direction, and adjusting according to the line type. Experimental results demonstrate the high performance of the proposed method, achieving a precision of 96.14% and a recall of 89.59%. Kim and Kim (2023) addressed the problem of classifying the functional types of lines when converting P&IDs into digital P&IDs. The connection relationships between symbols and lines in the P&ID were represented as a graph, and the task was modeled as a node classification problem based on GraphSAGE (Graph SAmple and aggreGatE). The experimental results demonstrated excellent line functional type classification performance with an accuracy of 99.53%. Chiang et al. (1998) developed a technique for vectorizing raster line images using the maximal inscribing circle. Their work preserves the width of lines by segmenting lines and tangents and implementing them in a way that allows spatial relationships to be computed efficiently.
Experimental results show that it is effective for vectorizing raster line images. Kim et al. (2022) developed a technology to automatically convert P&IDs in image format to digital format. The technology follows a three-step process: recognizing objects in P&ID images, reconstructing the topology between recognized objects, and generating digital P&IDs. They detected and adjusted different types of lines and flow directions during the line recognition process, achieving a precision of 95.25% and a recall of 87.91% in experiments. Overall, they achieved a precision of 96.65% and a recall of 96.40% in symbol recognition and a precision of 90.65% and a recall of 92.16% in text recognition, demonstrating the high performance of the proposed method. Kim et al. (2023) proposed a system built on a Raspberry Pi 4 that uses deep learning to detect broken stitches in real time through image-based analysis. With an accuracy of 82.5%, the system outperforms existing methods by up to 34.6%. It is affordable, can be retrofitted to legacy sewing machines, and has proven effective in real-world factory conditions. Kim et al. (2021) proposed a method for generating 3D texture models of vessel pipes by transferring 2D textures using deep learning-based object recognition and a cycle-consistent generative adversarial network (CycleGAN). This study improves AR/VR models in the shipbuilding industry by mapping realistic textures to virtual objects. The deep learning approach leverages a modified CycleGAN algorithm with direct-skip connection and double normalization to enhance object recognition, separating real-world objects from the background and mapping their textures onto 3D virtual models. Experimental results demonstrated improved texture realism and industrial applicability. Kong et al. (2022) proposed a method to automate variable indexing in ship design rule documents by utilizing deep learning for object recognition and PDF extraction. The system applies a faster region-based convolutional neural network model to recognize and extract components such as tables and figures from PDFs. The proposed method improves the accuracy of indexing variables in large rulebooks, achieving an F1 score of 0.93 for variable recognition. This system simplifies the process of reviewing ship design rules by automating variable detection and visualizing relationships between variables.

As the previous research shows, there is currently no research on straight and curved line detection in the 2D drawings used in shipyards; studies have primarily focused on lane recognition or P&IDs. As summarized in Table 1, prior studies in related fields have not addressed the specific requirements of shipyard drawings, highlighting a research gap in this area. To overcome the limitations of previous research, this study developed a technique to detect lines of various shapes, considering the complexity of ship lines drawings. The technique can effectively detect both straight lines and curves through image processing and extracts the data essential for reconstructing a three-dimensional model, aiming to efficiently restore lines drawings in shipyards and create precise three-dimensional models.

Table 1: Comparison of our study with previous research on line detection in videos and images.

| Study | Straight line detection | Detecting overlapping lines | Curve detection | Application |
| Aggarwal & Karl (2006) | O | X | X | Noisy images |
| Fontanelli et al. (2011) | X | X | O | Lane detection |
| Luo et al. (2023) | X | X | O | Lane detection |
| Han et al. (2024) | O | O (Simple crossed lines) | X | P&ID drawings |
| Jeong et al. (2022) | O | O (Simple crossed lines) | X | P&ID drawings |
| Moon et al. (2021) | O | O (Simple crossed lines) | X | P&ID drawings |
| Kim and Kim (2023) | O | O (Simple crossed lines) | X | P&ID drawings |
| This study | O | O (Multiply crossed lines) | O | Lines drawings, structure drawings |

3. Development of line detection

This section describes the configuration required to develop the line detection technology and details of each module.

3.1 Configuration

The system is composed of three modules: the user's data input module, the line detection module that performs image processing, and the data output module. A system configuration diagram of the line detection technology, which offers convenient graphical user interface (GUI)-based data input and output, is shown in Fig. 4.

Figure 4: System configuration for line detection technology.

The input data module, shown in Fig. 4a, provides functions for inputting a lines drawing and the information needed to start line detection via the GUI. It includes a function to convert a lines drawing in a document into a digital image and to crop one image into four according to each plan, as well as a preprocessing function for removing unnecessary outlines from the images. The line detection module, shown in Fig. 4b, performs preprocessing to remove unnecessary parts of the image. Then, based on the preprocessed image, it uses a starting point detection algorithm to define the starting points of each line and extracts the two-dimensional point data of the line between the starting point and the end point using a path finding algorithm. The output data module, shown in Fig. 4c, converts the generated two-dimensional point data into three-dimensional point data and visualizes the result. The overall flowchart is shown in Fig. 5, and each module is described in more detail below.

Figure 5: Overall flowchart of the system.

3.2 Input data module

3.2.1 Preprocessing

Preprocessing is used to smoothly extract the data required for hullform reconstruction. It aims to remove information that is unnecessary for hullform reconstruction, leaving a bitmap image consisting only of lines.

The preprocessing of a drawing is shown in Fig. 6. As illustrated in Fig. 6a, high-quality PDF drawings are required: the higher the quality of the prepared drawing, the more pixels are used to represent it, resulting in more accurate outcomes. Next, as shown in Fig. 6b, the PDF drawing is converted into a PNG image using the Pillow library, which allows the drawing to be extracted and converted into PNG format, as shown in Fig. 6c. Finally, in Fig. 6d, the converted image undergoes contour removal through morphological operations, based on the method described by Moon et al. (2021), completing the preprocessing of the image.

Figure 6: Preprocessing process of drawings.
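As an illustration of the document-to-image step, the sketch below rasterizes a PDF drawing to PNG. The pdf2image package, which returns Pillow image objects and requires the poppler backend, is assumed here for the rasterization itself; the file names and DPI are illustrative, not the authors' exact settings.

```python
# Sketch of the PDF-to-PNG conversion (Fig. 6b-c); pdf2image is an
# assumed helper that returns Pillow Image objects.
from pdf2image import convert_from_path

# A high DPI keeps thin hull lines several pixels wide after rasterization.
pages = convert_from_path("lines_drawing.pdf", dpi=600)
pages[0].save("lines_drawing.png", "PNG")
```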

Even after removing the outline, unnecessary information such as text, leader lines, and dimension lines still interferes with line detection, as shown on the left in Fig. 7. Also, in the case of the body plan, as shown in the right image of Fig. 7, auxiliary lines such as vertical and horizontal lines are displayed together. Therefore, unnecessary text, leader lines, dimension lines, and auxiliary lines were manually removed before line detection.

Figure 7: Unnecessary information hindering accurate line detection.

3.2.2 Crop image

The data input to the line detection module consist of the segmented lines drawing and the preprocessed image. As shown in Fig. 8a, the cropped images have different coordinate systems, as each line constituting the hull represents a different aspect of the ship's geometry. The section line results from the intersection of the hull and a vertical plane perpendicular to the hull's plane of symmetry. The buttock line results from the intersection of the hull and a vertical plane parallel to the plane of symmetry. The water line results from the intersection of the hull and a horizontal plane parallel to the still water surface. Therefore, the images need to be cropped according to each coordinate system. As shown in Fig. 8b, the user inputs the coordinates that divide the drawing into four plans; specifically, the user enters pixel x-coordinate values in the image coordinate system, which represent the pixel locations along the x-axis.

Figure 8: Process of cropping an image.
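A minimal sketch of the cropping step with Pillow follows; the function name and the exact assignment of the four regions to plans are assumptions for illustration.

```python
from PIL import Image

def crop_into_plans(path, x1, x2, y):
    """Split a preprocessed lines drawing into four plan images.

    x1 < x2 are the two user-entered x coordinates, and y bisects the
    sheet into upper and lower halves; which region holds which plan
    depends on the drawing layout.
    """
    img = Image.open(path)
    w, h = img.size
    # Image.crop takes a (left, upper, right, lower) pixel box.
    boxes = [(0, 0, x1, y), (x1, 0, w, y),   # upper half, split at x1
             (0, y, x2, h), (x2, y, w, h)]   # lower half, split at x2
    return [img.crop(b) for b in boxes]
```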

3.3 Line detection

Line detection is a technology that extracts pixel points from the image and restores two-dimensional drawings into three-dimensional data. In this study, preprocessing removes unnecessary information such as text, leader lines, and outlines. Computer graphics algorithms, namely the Bresenham algorithm (Kaleem et al., 2021), DBSCAN (Density-Based Spatial Clustering of Applications with Noise) (Ester et al., 1996), and a starting point search algorithm using morphology operations, are used to access the information of overlapping and intersecting lines. A line is recognized through the A* path finding algorithm (Yan, 2023) and pixel processing (Moon et al., 2021). To convert the line detection data into three-dimensional data, the coordinates of the line data are shifted by the transformation point entered by the user, scaling is performed to reflect the actual size of the ship, and three-dimensional conversion is performed by inputting the plan and three-dimensional coordinate values. The following sections describe each process in detail.

3.3.1 Starting point search algorithm

After preprocessing, only the lines of the hullform are left, as shown in Fig. 9a. A lines drawing is divided into four plans, and multiple lines overlap or cross, as shown in the right image of Fig. 9b, making accurate data extraction difficult. Therefore, a technique that can accurately distinguish the lines is required.

Figure 9: Complexly overlapping or intersecting lines in drawings.

This study applied the starting point search algorithm to access overlapping or intersecting lines, as shown in Fig. 10a. The starting point search algorithm lets the user input points and generates a straight line between them, as shown in Fig. 10. It starts with the user entering two points, as shown in Fig. 10b. Once the two points are entered, a straight line connecting them is created, as shown in Fig. 10c. The points where the generated straight line intersects the multiple lines in the drawing are used as the starting points for line detection, as shown in Fig. 10d.

Figure 10: Flow of the start point search algorithm.

Two common algorithms for rasterizing straight lines into pixels in computer graphics are the digital differential analyzer (DDA) algorithm (Dhanraj et al., 2023) and the Bresenham algorithm, as shown in Fig. 11. The DDA algorithm uses floating-point values and involves multiplication as well as division, which makes it relatively slow and computationally expensive. The Bresenham algorithm, on the other hand, uses integers and is not slowed down by complex real-number calculations, making it relatively efficient, with addition and subtraction as its most common operations.

Figure 11: A straight line represented by computer graphics technology.

Drawings in digitally converted image format are organized into pixels, and pixel coordinates are integers. Therefore, in order to select the pixels along a straight line according to a logical criterion, the Bresenham algorithm uses the equation of the straight line. The equation of a straight line passing through two points $(x_1, y_1)$ and $(x_2, y_2)$ is shown in equation (1).

$y = \frac{y_2 - y_1}{x_2 - x_1}(x - x_1) + y_1$ (1)

where $(y_2 - y_1)/(x_2 - x_1)$ is the slope of the straight line and $y_1$ plays the role of the intercept at $x = x_1$. The Bresenham algorithm first selects the axis with the larger displacement between the start and end points, and then, stepping along that axis in increments of 1, finds the nearest integer value on the other axis at each step. First, assume a straight line with a slope between 0 and 1. Let the current pixel be the $k$-th, with coordinate $(x_k, y_k)$, and find the next pixel $k + 1$ from it. Since the slope is between 0 and 1, the displacement is larger along the x-axis, so $x$ is increased by 1, and $y$ is determined, as shown in Fig. 12, from the breakpoint $m_{k+1}$ lying between $y_k$ and $y_k + 1$. If the calculated breakpoint $m_{k+1}$ is above the straight line, the value of $y$ is chosen to be $y_k$, so that the $(k+1)$-th pixel is $(x_k + 1, y_k)$, as shown in Fig. 12. If the calculated breakpoint $m_{k+2}$ is below the straight line, the value of $y$ is selected as $y_k + 1$, and the $(k+2)$-th pixel becomes $(x_k + 2, y_k + 1)$, as shown in Fig. 12.

Figure 12: Calculate breakpoints based on the x-axis.

Rather than determining the positions of the straight line and the breakpoint diagrammatically, a method is needed that a computer can use to determine them. To achieve this, a discriminant expression is required to determine whether the breakpoint $m_{k+1}$ is above or below the straight line. The discriminant is derived as follows.

By expressing equation (1) as an inequality, it is possible to determine whether the coordinates lie above or below a straight line.

$y > \frac{y_2 - y_1}{x_2 - x_1}(x - x_1) + y_1$ (2)
$y < \frac{y_2 - y_1}{x_2 - x_1}(x - x_1) + y_1$ (3)

Rearranging equations (2) and (3) so that one side is zero gives equations (4) and (5).

$y - \frac{y_2 - y_1}{x_2 - x_1}(x - x_1) - y_1 > 0$ (4)
$y - \frac{y_2 - y_1}{x_2 - x_1}(x - x_1) - y_1 < 0$ (5)

Equations (4) and (5) are multiplied by $x_2 - x_1$ to eliminate the denominator, and by 2 because a breakpoint inherently lies between pixels at a half-pixel offset of 0.5; doubling keeps all calculations in integers and prevents the speed reduction caused by complex real-number calculations. Solving yields the discriminant in equations (6) and (7).

$d(x, y) = 2(x_2 - x_1)(y - y_1) - 2(y_2 - y_1)(x - x_1) > 0$ (6)
$d(x, y) = 2(x_2 - x_1)(y - y_1) - 2(y_2 - y_1)(x - x_1) < 0$ (7)

The discriminant obtained from this sequence of steps is used to evaluate the breakpoint $m_{k+1} = (x_k + 1, y_k + 0.5)$, as shown in equation (8).

$d(m_{k+1}) = 2(x_2 - x_1)(y_k + 0.5 - y_1) - 2(y_2 - y_1)(x_k + 1 - x_1)$ (8)

By substituting the breakpoint into the discriminant, as in equation (8), and checking whether the result is greater or less than zero, the relative position of the breakpoint, above or below the line, can be determined. Based on this positional relationship, the pixels that form the straight line are selected.
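The derivation above condenses into the usual integer-only update rule. The sketch below covers the 0 ≤ slope ≤ 1, $x_1 < x_2$ case discussed in the text; the remaining octants follow by symmetry.

```python
def bresenham(x1, y1, x2, y2):
    """Integer-only Bresenham line for 0 <= slope <= 1 and x1 < x2."""
    dx, dy = x2 - x1, y2 - y1
    # d tracks the sign of the equation (8) discriminant at the next
    # breakpoint (x + 1, y + 0.5), kept in pure integer arithmetic.
    d = 2 * dy - dx
    y, points = y1, []
    for x in range(x1, x2 + 1):
        points.append((x, y))
        if d > 0:                  # breakpoint below the line: step up
            y += 1
            d += 2 * (dy - dx)
        else:                      # breakpoint above the line: keep y
            d += 2 * dy
    return points
```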

3.3.2 Morphology operation

When the Bresenham algorithm generates a straight path between the points entered by the user, the result is as shown in Fig. 13.

Figure 13: A straight path generated using the Bresenham algorithm.

These straight lines make it possible to access complexly overlapping or intersecting lines in the lines drawing and obtain the starting points for line detection, as shown in Fig. 14a. However, the straight line generated by the Bresenham algorithm is only one pixel wide, and there are cases where it does not meet a black pixel, as shown in Fig. 14b.

Figure 14: Case where a straight line does not meet a black pixel.

To solve the problem of the straight line generated by the Bresenham algorithm not meeting the line consisting of black pixels, as shown in Fig. 15a, the thickness of the line is increased from one pixel to multiple pixels through the morphology dilation operation, which resolves the problem, as shown in Fig. 15b.

Figure 15: Increasing thickness with morphology operation.
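A sketch of this thickening step with OpenCV is shown below; the 3 × 3 kernel size and the function name are assumptions for illustration.

```python
import cv2
import numpy as np

def thicken_path(points, shape, ksize=3):
    """Rasterize the Bresenham path and dilate it from one pixel to
    multiple pixels so it reliably meets the drawing lines (Fig. 15)."""
    path_img = np.zeros(shape, dtype=np.uint8)
    for x, y in points:
        path_img[y, x] = 255                  # image rows are indexed by y first
    kernel = np.ones((ksize, ksize), np.uint8)
    return cv2.dilate(path_img, kernel, iterations=1)
```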

3.3.3 Clustering and extracting point data

Increasing the thickness of the Bresenham straight lines from one pixel to a multi-pixel form through the morphology operation, as shown in Fig. 16a, solved the problem of the straight lines not meeting the black-pixel lines in the drawing. However, two or more green pixels now appear at each intersection of the straight line with a drawing line, as shown in Fig. 16b.

Figure 16: Multiple intersection points of the straight line with a line.

As a starting point for processing, only one intersection point should exist per line, but the intersection point data are listed without any particular distinction, as shown in Fig. 17. The intersection data therefore need to be clustered to define which intersection belongs to which line in each plan, and a single point must be extracted from each cluster of intersections to define the starting point for processing.

Figure 17: Extract points by clustering intersected point data.

The algorithm used to cluster the intersection data is DBSCAN, which groups data points that are close together based on density in multidimensional data, as shown in Fig. 18.

Figure 18: DBSCAN algorithm for clustering point data.

As shown in Fig. 19, the intersection data on the same line are usually separated by 1–2 pixels in Manhattan distance, and the minimum sample size, the minimum number of points required to form a cluster, is set to 2. The parameters, determined by examining several cases, are shown in Table 2.

Figure 19: DBSCAN algorithm for clustering point data.

Table 2: Description of parameters used for the DBSCAN algorithm.

| Parameter | Description | Value |
| $\varepsilon$ | Distance between data | 2 |
| Minimum sample | Minimum number of data that can form a cluster | 2 |

The epsilon value was set to 2, based on the maximum possible Manhattan distance between neighboring pixels in a 3 × 3 window. This is the diagonal distance of 2 pixels and represents the maximum distance between points that can still be considered part of the same cluster. Using this distance ensures that all relevant points along the same line are grouped together even if they are slightly offset.

Regarding the minimum sample size, it was assumed that the thinnest line in a 3 × 3 window would consist of at least two points. After testing various cases, we confirmed that when at least two points are detected within the 3 × 3 window, they can form a valid cluster. This value ensures that thin lines, which might consist of only a couple of pixels, are appropriately clustered. Once a cluster of at least two points, with a Manhattan distance of 1–2 pixels between the intersection data, is formed, the average of all points in the cluster is calculated, and the pixel nearest to this average is selected as the final representative pixel for the cluster.

Utilizing the DBSCAN algorithm, the data of the multiple intersections of the lines generated by the Bresenham algorithm are clustered, and one point is extracted from each cluster to serve as the starting point for processing, as shown in Fig. 20.

Figure 20: Extracting the starting point for processing from data clusters.
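A sketch of the clustering and representative-point extraction with scikit-learn follows, using the Table 2 parameters; the function name is an assumption.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_starting_points(intersections):
    """Cluster intersection pixels (an N x 2 array of (x, y)) and
    return one representative pixel per detected line, per Table 2."""
    pts = np.asarray(intersections)
    labels = DBSCAN(eps=2, min_samples=2, metric="manhattan").fit_predict(pts)
    reps = []
    for lbl in set(labels) - {-1}:            # -1 marks DBSCAN noise
        cluster = pts[labels == lbl]
        mean = cluster.mean(axis=0)
        # Choose the cluster pixel nearest (Manhattan) to the average.
        nearest = cluster[np.abs(cluster - mean).sum(axis=1).argmin()]
        reps.append(tuple(nearest))
    return reps
```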

3.3.4 A* algorithm for line detection

The A* algorithm (Hart et al., 1968; Yan, 2023) is a directed graph search algorithm that uses heuristic search techniques to find the shortest path from a given starting point to a goal point. It uses the estimated distance from the current state to the goal to decide which path to expand during the search, and the heuristic reduces the time complexity and memory usage of the search process. The heuristic used in the A* algorithm is equation (9). $H$ is the expected travel cost from the current state to the goal, computed with the Euclidean distance to smooth the value comparison between candidate nodes. $G$ is the cost of the path traveled from the starting point to the current state. $F$ is the sum of the cost of traveling to the current state ($G$) and the expected cost ($H$), and the candidate node with the minimum $F$ is selected by comparing the $F$ values of the candidate nodes.

$F = G + H$ (9)

To apply the A* algorithm, additional inputs are required along with the heuristic. First, the A* algorithm requires a binary grid with the same width and height as the image used for line detection. Based on an appropriate threshold, the pixels are converted into black and white pixels consisting of 1s and 0s to create the binary grid shown in Fig. 21.

Figure 21: A binary grid generated with the same dimensions as the image.
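A minimal sketch of the grid construction; the threshold value of 128 and the file name are assumptions.

```python
import numpy as np
from PIL import Image

# Binary occupancy grid for A* (Fig. 21): 1 where a line pixel is
# present, 0 elsewhere; 128 is an assumed grayscale threshold.
gray = np.array(Image.open("plan.png").convert("L"))
grid = (gray < 128).astype(np.uint8)
```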

Second, a starting point needs to be entered. The starting point is a point extracted from the data grouped using the Bresenham algorithm and DBSCAN, as shown in Fig. 22.

Figure 22: Extracted point in the cluster data as the start point.

Third, the GUI asks for a midpoint and an endpoint. The additional midpoint is needed because, when lines intersect in a complex way, the two endpoints alone may not accurately define the path of the line. When exploring the paths between the start point and the midpoint and between the midpoint and the endpoint defined in Fig. 22, the algorithm examines whether there are black pixels within a 3 × 3 window, selects the appropriate black pixels according to the heuristic, explores the paths, and concatenates them, as shown in Fig. 23.

Figure 23: Line detection with midpoint and endpoint.
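The sketch below illustrates the search: A* over the binary grid with a 3 × 3 neighborhood and the Euclidean heuristic of equation (9). It is a generic implementation of the technique described above, not the authors' exact code; start and goal are (x, y) pixel tuples.

```python
import heapq
import itertools
import math

def astar(grid, start, goal):
    """A* on a binary grid (1 = line pixel, walkable); moves use the
    3 x 3 neighborhood, H is the Euclidean distance to the goal."""
    h = lambda p: math.dist(p, goal)
    tie = itertools.count()                     # breaks ties in the heap
    frontier = [(h(start), next(tie), 0.0, start, None)]
    came_from = {}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue                            # already expanded
        came_from[cur] = parent
        if cur == goal:                         # walk back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx, dy) == (0, 0) or not (0 <= ny < len(grid)) \
                        or not (0 <= nx < len(grid[0])) or not grid[ny][nx]:
                    continue
                ng = g + math.hypot(dx, dy)     # G: cost traveled so far
                heapq.heappush(frontier, (ng + h((nx, ny)), next(tie),
                                          ng, (nx, ny), cur))
    return None

# The detected line is the start->midpoint path joined with the
# midpoint->endpoint path, as described above.
```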

In Fig. 23, when path finding between the start point and the midpoint and between the midpoint and the endpoint is completed, the detected line is displayed to the user through the GUI as red pixels. Fig. 24 shows the result of detecting the bottom water line (value = 0): the detection result can be checked through the red pixels, the pixel coordinates are marked at intervals of 100, and the pixel coordinate system is the image coordinate system.

Figure 24: Example of line detection on a bottom water line.

3.4 Output data module

The output data module provides functions to transform and visualize the two-dimensional line detection data as three-dimensional data. Transformation converts the image coordinate system into the ship coordinate system, scaling applies the scale of the actual ship, and visualization makes the results visible.

3.4.1 Transformation

As shown in Fig. 24, the coordinates of the pixels composing the line detection data are based on the image coordinate system shown in Fig. 25. Therefore, they need to be converted into the ship coordinate system to represent the ship. The origin of the line detection data is moved by the reference point specified in the drawing.

Figure 25: Specify a reference point to express in ship coordinate system.

After specifying the reference point as shown in Fig. 26a, the line detection data are transformed by shifting the axes by the x and y values of the reference point and changing the sign of all y values, so that the height axis points upward rather than downward as in the image coordinate system, as shown in Fig. 26b.

Figure 26: Shift the axis by the reference point and change the sign of the y-value.
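A minimal sketch of this transformation, assuming the points and the reference point are given in pixel coordinates:

```python
import numpy as np

def to_ship_coords(points, ref):
    """Shift the origin to the user-specified reference point and flip
    the y axis so height increases upward (Fig. 26)."""
    pts = np.asarray(points, dtype=float) - np.asarray(ref, dtype=float)
    pts[:, 1] *= -1.0            # image y grows downward; negate it
    return pts
```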

3.4.2 Scaling

The coordinate values of the pixels comprising the line detection data are defined on the grid spanned by the width and height of the image. Therefore, to reflect the size of the actual ship, as shown in Fig. 27b, each coordinate in Fig. 27a is scaled by multiplying it by the scale factor calculated by equation (10).

$\text{scale factor} = \frac{Depth_{ship}}{Depth_{pixel}}$ (10)
Figure 27: Scaling with scale factor.

The scale factor is $Depth_{ship}$ (the height from the ship's base line to the upper deck) divided by $Depth_{pixel}$ (the difference between the maximum and minimum pixel heights). All coordinates are scaled as shown in Fig. 27.
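A short sketch of equation (10) applied to the transformed points; the function name is illustrative.

```python
def scale_points(points, depth_ship, depth_pixel):
    """Multiply every coordinate by equation (10)'s scale factor so
    pixel units become the actual ship's units (metres)."""
    factor = depth_ship / depth_pixel
    return [(x * factor, y * factor) for x, y in points]
```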

3.4.3 Three-dimensional restoration

After transformation and scaling, three-dimensional restoration converts the two-dimensional data extracted from the drawing into three-dimensional form. Each plan in the drawing has a different coordinate system, such as the y-z plane shown in Fig. 28a and the x-z plane shown in Fig. 28b. Therefore, a value that accounts for the plan must be entered for the two-dimensional coordinates, as shown in Fig. 29.

Figure 28: Plans in a drawing with different coordinate systems.

Figure 29: Plans and values in drawings.

As shown in Fig. 29, the plan and value information in the drawing differs for each line, so the user needs to find it manually. A GUI was developed that accepts the value as a real number and the plan information as a string, as shown in Fig. 30. The GUI consists of a spin box for the value and three buttons for the plan information: the X button denotes a section line displayed on the y-z plane, the Y button a buttock line displayed on the x-z plane, and the Z button a water line displayed on the x-y plane. The three-dimensional coordinate information is entered by clicking the button corresponding to each line.

Figure 30: GUI for inputting plans and values.

Three-dimensional restoration maps the two-dimensional data into three dimensions. For example, for Line 1 in Fig. 31, the x value is 0, so the line data are recognized as (y, z) coordinates and 0 is substituted for all x values. Similarly, for Line 2, the z value is 3.0, so the line data are transformed to (x, y, 3).

Figure 31: Three-dimensional restoration by mapping.
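A sketch of this mapping, with the plan encoded as the "X", "Y", or "Z" button choice of Fig. 30 (function name assumed):

```python
def restore_3d(points2d, plan, value):
    """Insert the plan's constant coordinate into each 2D point:
    'X' = section line (y-z plane), 'Y' = buttock line (x-z plane),
    'Z' = water line (x-y plane)."""
    if plan == "X":
        return [(value, u, v) for u, v in points2d]
    if plan == "Y":
        return [(u, value, v) for u, v in points2d]
    return [(u, v, value) for u, v in points2d]   # plan == "Z"

# Example from Fig. 31: Line 1 with x = 0 -> (0, y, z); Line 2 with
# z = 3.0 -> (x, y, 3.0).
```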

3.4.4 Visualization

After three-dimensional restoration, the Open3D library was used for point-based visualization. Open3D supports functions such as three-dimensional reconstruction, surface alignment, and point cloud filtering, and is a preferred choice among developers owing to its high-performance data processing and high compatibility with Python. For point visualization with Open3D, the Numpy library is used to convert the line detection data into an array; a point cloud object is then created, the line detection data are assigned to it, and the resulting point cloud is visualized. The visualized line detection data are shown in Fig. 32.

Figure 32: Visualization of points with Open3D.
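A minimal sketch of the visualization step with Open3D, assuming `points3d` holds the restored (N, 3) data:

```python
import numpy as np
import open3d as o3d

def visualize(points3d):
    """Show the restored hullform points as an Open3D point cloud."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points3d, dtype=np.float64))
    o3d.visualization.draw_geometries([pcd])
```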

4. Applications

Section 4 shows the results of applying the line detection technology to lines drawings using the program developed in this study. The program allows the user to input the preprocessed target drawings, perform line detection for each drawing, restore the data to three dimensions through three-dimensional restoration, and visualize the result. The target ship is a 63K bulk carrier, a large commercial vessel designed to transport bulk cargo such as coal, grain, and iron ore, with a deadweight capacity of 63 000 tons. The lines drawing is shown in Fig. 33, and the main dimensions of the target ship are given in Table 3.

Figure 33: 63K bulk carrier lines to which line detection will be applied.

Table 3: Principal dimensions of the 63K bulk carrier.

| Specification | Value |
| Length between perpendiculars [m] | 193 |
| Breadth [m] | 32.2 |
| Depth [m] | 20 |

4.1 Preprocessing of input data for line detection

Lines drawings have outlines. When printed or reproduced, the edges of the drawing must be accurately represented so that the printing or reproduction process proceeds smoothly. To remove outlines, which are unnecessary for hullform reconstruction, the algorithm of Moon et al. (2021) was applied. Starting from the upper left edge of the bitmap image, as shown in Fig. 34, a pixel search is performed across the width and height of the drawing. If a black pixel with RGB channel values of 0 is detected during the search, it is considered part of the outline, and the outline is traced by exploring all black pixels connected to it vertically and horizontally. Finally, morphology erosion and dilation operations are performed on the detected outline to remove it.

Figure 34: Outlines removed using pixel search.
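A sketch of the outline search just described, assuming a grayscale array where 0 marks black pixels; the follow-up erosion/dilation cleanup is omitted.

```python
import numpy as np
from collections import deque

def remove_outline(binary):
    """Erase the drawing outline: find the first black pixel scanning
    from the top-left, flood-fill its 4-connected black neighbors, and
    whiten them (after Moon et al., 2021)."""
    img = binary.copy()
    h, w = img.shape
    seed = next(((y, x) for y in range(h) for x in range(w)
                 if img[y, x] == 0), None)
    if seed is None:
        return img                              # no black pixel found
    queue, seen = deque([seed]), {seed}
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and img[ny, nx] == 0 \
                    and (ny, nx) not in seen:
                seen.add((ny, nx))
                queue.append((ny, nx))
    for y, x in seen:
        img[y, x] = 255                         # erase the outline pixels
    return img
```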

To crop the image as shown in Fig. 8, the preprocessed image is input to the GUI and a window for entering one y and two x coordinates is created, as shown in Fig. 35. The y coordinate bisects the image into upper and lower halves, and the two x coordinates then divide the halves, producing the four plans.

Figure 35: Crop an image via the GUI.

4.2 Application of line detection

Line detection can be performed using the preprocessed data described in Section 4.1. When the line detection button is clicked, a window is created, as shown in Fig. 36, that displays the split image and accepts the coordinates needed to apply the starting point search algorithm using the Bresenham algorithm and DBSCAN described in Sections 3.3.1–3.3.3.

Figure 36: Create a window for input to the starting point search algorithm.

By inputting three points, a straight line connecting the three points is created, as shown in Fig. 13, and the intersection points where the straight line meets the lines in the drawing are automatically defined as the starting points for line detection. After the intersection points are successfully defined, a window is created, as shown in Fig. 37, in which two midpoints and two endpoints can be entered for the A* algorithm described in Section 3.3.4.

Figure 37: A window generated for input to the A* algorithm.

After the two midpoints and two endpoints are entered for the A* algorithm, the detected line is displayed as red pixels, as shown in Fig. 38, by exploring the path between the start point and midpoint and between the midpoint and endpoint and concatenating the paths. When line detection is finished in Fig. 38, a window for inputting the three-dimensional coordinate information is created, as shown in Fig. 30, where the value and plan information can be entered by referring to the drawing.

Figure 38: Line detection data generated by the A* algorithm.

4.3 Application of three-dimensional restoration of line detection data

The line detection process described in Section 4.2 is repeated until there are no more starting points to process. When there are no more starting points in the split image, a window is created, as shown in Fig. 39, where the ship depth and pixel depth can be entered to calculate the reference point and the scale factor for the transformation.

Figure 39: A window generated for input to the three-dimensional restoration data.

When the processes of Sections 4.1 and 4.2 and the three-dimensional restoration of Section 4.3 have been repeated for every split image, the entire line detection process is complete. The points can then be visualized by clicking the visualization button described in Section 3.4.4. The visualized line detection result for the 63K bulk carrier is shown in Fig. 40.

Figure 40: Point visualization of 63K bulk carrier.

5. Conclusions and future works

This study developed a three-dimensional data restoration technique from two-dimensional drawings to reconstruct the hullform. To apply image processing techniques, the Pillow library was used to convert two-dimensional drawings from documents into images. Unnecessary information was removed through preprocessing, including manual processing, to facilitate three-dimensional data restoration. To process complex drawings with many intersecting lines, straight lines were generated using the Bresenham algorithm and their intersections were clustered with DBSCAN. Lines in the drawing were recognized by navigating the path between the midpoint and endpoint entered by the user through the A* algorithm, using the points where the straight line intersects the lines in the drawing as starting points. Three-dimensional restoration for visualization was also implemented, and these features were tested on 63K bulk carrier drawings.

Future research will focus on evaluating and reducing the errors or distortions that may occur during three-dimensional restoration. For this purpose, three-dimensional point data will be generated using B-splines, and hydrostatic data will be constructed to perform a comparative error analysis in both real and virtual program environments. In addition, the program will be improved to handle drawings with details that are difficult for computers to interpret, or drawings with many shades, which were not covered in this study. Techniques will be developed to increase the generality and accuracy of the program through experiments on drawings of varying complexity and distortion. To increase the generalizability of the findings, the performance of the program will also be tested on drawings of different ship types, giving a deeper understanding of how well the program performs across various situations. Moreover, future work will address the treatment of crossing points, such as those on the water line and section line, which can differ; methods will be developed to handle these points effectively during the restoration process, ensuring accurate interpretation in diverse scenarios.

Funding

This work is supported in part by funds from the “Leaders in INdustry-university Cooperation 3.0” Project (LINC3.0) supported by the Ministry of Education and National Research Foundation of Korea, and the Competency Development Program for Industry Specialist supported by Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (LINC3.0: #CWNU-2024-0545, KIAT: #P0017006).

Conflict of interest statement

The authors declare no conflict of interest.

Author contributions

Jun-su Park (Methodology, Software, Writing—original draft), and Seung-Ho Ham (Conceptualization, Methodology, Software, Writing—review & editing).

Acknowledgments

We would like to thank Mr Yang-Ik Kim at CADAS for sharing their drawings and providing valuable feedback for this study.

Data availability

The data underlying this article will be shared on reasonable request to the corresponding author.

References

Aggarwal, N., & Karl, W. C. (2006). Line detection in images through regularized Hough transform. IEEE Transactions on Image Processing, 15, 582–591.

Chiang, J. Y., Tue, S. C., & Leu, Y. C. (1998). A new algorithm for line image vectorization. Pattern Recognition, 31(10), 1541–1549.

Dhanraj, A., Zope, B., Jafri, S. M., & Buchade, A. (2023). DDA* algorithm for path planning. In A. Jere & S. Gade (Eds.), 2023 International Conference on Integration of Computational Intelligent System (ICICIS 2023). Institute of Electrical and Electronics Engineers Inc.

Ester, M., Kriegel, H., Sander, J., & Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In E. Simoudis & J. Han (Eds.), KDD-96: The Second International Conference on Knowledge Discovery and Data Mining (pp. 226–231). Association for the Advancement of Artificial Intelligence. https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf

Fontanelli, D., Cappelletti, M., & Macii, D. (2011). A RANSAC-based fast road line detection algorithm for high-speed wheeled vehicles. In H. Zhang & K. Lee (Eds.), IEEE Instrumentation and Measurement Technology Conference (pp. 186–191). Institute of Electrical and Electronics Engineers Inc.

Han, S. T., Moon, Y., Lee, H., & Mun, D. (2024). Rule-based continuous line classification using shape and positional relationships between objects in piping and instrumentation diagram. Expert Systems with Applications, 248, 123366.

Hart, P., Nilsson, N., & Raphael, B. (1968). A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4, 100–107.

Jeong, D., Park, K.-P., Lim, D., & Chung, H. (2022). A study on P&ID connection information recognition based on object location. Korean Journal of Computational Design and Engineering, 27, 481–491.

Kaleem, M. K., Verma, D., & Idrisi, M. J. (2021). Generalization of line drawing algorithm—An effective approach to minimize the error in the existing Bresenham's line drawing algorithm. In S. Limkar & M. Khurjekar (Eds.), 2021 International Conference on Emerging Smart Computing and Informatics (ESCI) (pp. 516–521). Institute of Electrical and Electronics Engineers Inc.

Kim, G., & Kim, B. (2023). Classification of functional types of lines in P&IDs using a graph neural network. IEEE Access, 11, 73680–73687.

Kim, B. C., Kim, H., Moon, Y., Lee, G., & Mun, D. (2022). End-to-end digitization of image format piping and instrumentation diagrams at an industrially applicable level. Journal of Computational Design and Engineering, 9, 1298–1326.

Kim, H., Lee, H., Ahn, S., Jung, W., & Ahn, S. (2023). Broken stitch detection system for industrial sewing machines using HSV color space and image processing techniques. Journal of Computational Design and Engineering, 10, 1602–1614.

Kim, M., Lee, K., Han, Y., Lee, J., & Nam, B. (2021). Generating 3D texture models of vessel pipes using 2D texture transferred by object recognition. Journal of Computational Design and Engineering, 8, 475–487.

Kong, M., Roh, M., Kim, K., Kim, J., Kim, J., & Park, H. (2022). Variable indexing method in rule documents for ship design using extraction of portable document format elements. Journal of Computational Design and Engineering, 9, 2556–2573.

Luo, Y., Zhang, Y., & Wang, Z. (2023). Lane line detection algorithm based on improved UNet network. In T. Shosaku & Y. Liu (Eds.), 2023 8th International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS) (pp. 105–110). Institute of Electrical and Electronics Engineers Inc.

Moon, Y., Lee, J., Mun, D., & Lim, S. (2021). Deep learning-based method to recognize line objects and flow arrows from image-format piping and instrumentation diagrams for digitization. Applied Sciences (Switzerland), 11, 10054.

Yan, Y. (2023). Research on the A Star algorithm for finding shortest path. Highlights in Science Engineering and Technology, 46, 154–161.

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact [email protected]