Mechanical engineering and machine science
Purpose of research. The aim of the study is a comparative analysis of the stress-strain state of welded joints when welding saddle branches onto in-service gas pipelines, both under normal conditions using the standard technology and at low temperatures using the proposed technology.
Methods. To achieve this aim, the mathematical model of the thermoelastic state was solved numerically by the finite element method in Python using the FEniCS computing package. The computational mesh was built with the Gmsh program, and the results were visualized with the ParaView package.
Results. Computational experiments on welding a saddle branch to an in-service gas pipeline at permissible and at low ambient temperatures show that the dynamics of the temperature fields, stresses, and strains in the heat-affected zone are almost identical.
Conclusion. The calculation results showed that when saddle branches are welded to polyethylene pipes of in-service gas pipelines at low temperatures using the proposed technology, the main physical and mechanical processes proceed as they do under normal welding conditions and provide the required strength.
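As a purely illustrative aside (not the paper's FEniCS thermoelastic model), the transient temperature field around a weld can be sketched with a one-dimensional explicit finite-difference scheme for the heat equation; all material parameters, temperatures, and function names below are invented for the example:

```python
# Simplified 1-D sketch of the cooling of a localized weld "hot spot".
# The heat equation dT/dt = alpha * d2T/dx2 is advanced with an explicit
# finite-difference scheme; ambient temperature is held at both ends.

def simulate_weld_cooling(n=51, alpha=1e-5, dx=1e-3, dt=0.01, steps=500,
                          t_ambient=20.0, t_weld=220.0):
    """Return the temperature profile after `steps` time steps."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme would be unstable"
    temps = [t_ambient] * n
    temps[n // 2] = t_weld               # localized weld heat source
    for _ in range(steps):
        new = temps[:]
        for i in range(1, n - 1):
            new[i] = temps[i] + r * (temps[i-1] - 2*temps[i] + temps[i+1])
        new[0] = new[-1] = t_ambient     # fixed ambient boundary condition
        temps = new
    return temps

profile = simulate_weld_cooling()
```

A full model like the paper's would instead solve the coupled thermoelastic equations on a 3-D mesh, but the stability constraint and time-stepping structure carry over.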
Computer science, computer engineering and IT management
Purpose of research. The purpose of this research is to develop an ontology structure as the basis of a database/knowledge base for selecting effective metaheuristic algorithms for solving the problem of load distribution in heterogeneous distributed dynamic computing environments, taking into account the overhead of data transmission over the network.
Methods. The main scientific methods used in this study are domain analysis, methods for constructing subject ontologies, numerical optimization methods and computer modeling.
Since the literature does not present resource allocation planning models that account for geographic distribution, intermediate data transmission routes, the dynamics of topologies and load, and system heterogeneity in terms of the criteria for assessing load distribution quality, this article proposes a new model that takes these features into account. The computational complexity of solving the planning problem becomes one of the variable parameters with a significant impact on the planning result: as the computational effort decreases, the result deteriorates accordingly. A greedy strategy is therefore proposed as a solution method: from the optimization methods under consideration, select the least labor-intensive one that still yields the best result within the allotted time. Test runs of simulated annealing algorithms demonstrate different effectiveness under different initial conditions of the problem; it is therefore advisable, for selected classes of problems, to choose algorithms that are effective in terms of both solution quality and labor intensity.
Results. The result of the study is the structure of an ontology of effective algorithms. The results also include instances of simulated annealing algorithms and of tasks incorporated into the ontology, linked by the “efficiency” relation.
Conclusion. This article proposes the structure of an ontology of effective optimization algorithms and an approach to solving the problem of distributing the computational load, taking into account the complexity of the distribution procedure through the “greedy” selection of the most effective optimization algorithms.
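The “greedy” selection described above can be sketched as follows; the candidate algorithms, their labor costs, and quality scores are invented placeholders, not instances from the actual ontology:

```python
# Hypothetical ontology instances: algorithm name -> labor cost and expected
# solution quality (all numbers invented for illustration).
CANDIDATES = [
    ("exhaustive_search",   {"cost": 100.0, "quality": 1.00}),
    ("simulated_annealing", {"cost": 10.0,  "quality": 0.95}),
    ("greedy_heuristic",    {"cost": 1.0,   "quality": 0.80}),
]

def select_algorithm(candidates, time_budget):
    """Among algorithms whose labor cost fits the allotted time budget,
    return the best-quality one, preferring the cheaper on ties."""
    feasible = [(name, p) for name, p in candidates
                if p["cost"] <= time_budget]
    if not feasible:
        return None          # nothing fits the budget
    best = max(feasible, key=lambda item: (item[1]["quality"],
                                           -item[1]["cost"]))
    return best[0]
```

With a budget of 15 time units this would pick `simulated_annealing`: exhaustive search does not fit, and among the feasible options annealing has the higher expected quality.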
Purpose of research. To develop a forecasting model of electric power consumption and to assess the factors influencing it. The obtained forecast estimates of energy consumption will improve the quality and efficiency of management decisions at all levels of administrative management.
Methods. The article presents an analytical review of existing methods of cognitive modelling and forecasting of electric power consumption, and describes the software implementation of an information-computing system that makes it possible to forecast electric power consumption by the population of an administrative-territorial entity. An approach is proposed to describing the factors of electric power consumption both by the population and by various branches of the national economy, as well as by service organisations. Special software has been developed that produces model results of electric power consumption in an automated mode and carries out factor analysis of power consumption. Experimental verification of the cognitive modelling and forecasting programme is given for electric power consumption by the population of the Lgovsky district of the Kursk region. The developed software also makes it possible to evaluate the adequacy of the obtained results and promptly adjust the model parameters.
Results. As a result of the research a fuzzy cognitive map of energy consumption for a municipal entity was developed. The concepts of the subject area describing the influence of various groups of factors on the level of electric energy consumption were identified. Forecast estimates of electricity consumption were obtained, which were based on the data for the retrospective period. Adequacy indicators based on the calculation of statistical criteria are determined for the obtained estimates.
Conclusion. The results of the study have shown that combining cognitive and statistical methods makes it possible to obtain an adequate solution to the problem of energy consumption forecasting.
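One update step of fuzzy-cognitive-map forecasting, of the kind combined here with statistical methods, can be sketched as follows; the concepts and influence weights are invented and far simpler than the paper's actual map for the Lgovsky district:

```python
# Toy fuzzy cognitive map: each concept's next value is a sigmoid-squashed
# weighted sum of the influences of the other concepts (weights invented).

import math

CONCEPTS = ["population", "tariff", "appliance_stock", "consumption"]
# WEIGHTS[i][j]: influence of concept i on concept j, in [-1, 1].
WEIGHTS = [
    [0.0, 0.0, 0.3,  0.6],   # population
    [0.0, 0.0, 0.0, -0.4],   # higher tariff suppresses consumption
    [0.0, 0.0, 0.0,  0.5],   # appliance stock
    [0.0, 0.0, 0.0,  0.0],   # consumption (target concept)
]

def step(state):
    """One synchronous update of the cognitive map."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    n = len(state)
    return [sigmoid(sum(WEIGHTS[i][j] * state[i] for i in range(n)))
            for j in range(n)]

state = [0.8, 0.5, 0.6, 0.5]
for _ in range(10):              # iterate until the values settle
    state = step(state)
forecast = state[CONCEPTS.index("consumption")]
```

In a real map the retrospective consumption data would calibrate the weights, and the statistical criteria mentioned in the abstract would then assess the adequacy of the resulting forecast.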
Purpose of research. In the tasks of authenticating groups of messages encoded in block-chaining mode, specific tree-like structures must be formed and processed. The contents of such structures, in addition to information about data placement in the internal memory of the computing nodes, describe the relative location of messages in the data stream between subscribers of a peer-to-peer network. This information is needed to isolate, from the entire message stream arriving at the receiver, a structured set whose source is uniquely determined. Segmenting the tree structure makes it possible to parallelize the processes of adding elements to it and searching for areas corresponding to an authentication error.
Methods. The division of the tree structure into areas subject to modification and areas subject to analysis is based on a metric formed dynamically from message authentication codes: the position of a specific message in the structured set of messages transmitted from the source to the receiver. The value of this metric determines the distance from the root of the tree that defines the boundary between the two named areas.
Results. By isolating the modified and the analyzed sections of the tree structure, races between the processes implementing independent algorithms for working with it are eliminated. It is shown that authentication errors can be detected before the last message in a structured set is received; as a result, group messages that were to be sent after the error was detected no longer need to be transmitted. Formulas are given for estimating the average transmission time of a set of messages under sequential and parallel implementation of the procedures for forming and processing the tree structure containing descriptors of the messages arriving at the receiver.
Conclusion. The paper shows that parallel implementation of the algorithm for adding elements to the tree structure and the algorithm for searching for areas corresponding to an error reduces the average transmission time of a group of messages by 5-12% compared with sequential implementation of these algorithms. This reduces the load on the communication channel for the target class of systems that use block-chaining encoding for authentication.
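The partitioning idea can be sketched as follows; the MAC-derived depth metric, the key, and the data layout are illustrative assumptions, not the paper's actual structures:

```python
# Toy sketch: message descriptors are assigned a position metric derived
# from a keyed hash, and a boundary depth separates the region still being
# modified (new arrivals) from the region frozen for error analysis, so
# independent workers never touch the same region (no races).

import hashlib

def position_metric(message: bytes, key: bytes) -> int:
    """Distance from the root, derived here from a keyed hash (toy MAC)."""
    digest = hashlib.sha256(key + message).digest()
    return digest[0] % 8          # toy depth in the range 0..7

def partition(messages, key, boundary):
    """Split descriptors into the modifiable and the analyzable regions."""
    modify, analyze = [], []
    for msg in messages:
        depth = position_metric(msg, key)
        (modify if depth >= boundary else analyze).append((depth, msg))
    return modify, analyze
```

Because the two lists are disjoint by construction, an insertion worker operating on `modify` and an error-search worker operating on `analyze` can run concurrently without coordinating access.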
Purpose of research is to develop a new high-speed method for searching for trapping sets in graph codes that ensures the completeness of the search.
Methods. There are two approaches to finding trapping sets. The first, based on the Monte Carlo method with biased probability estimation using importance sampling, involves the use of a decoder. The advantage of this approach is its high performance; the disadvantages are its dependence on the decoder parameters and channel characteristics and a finite probability of missing trapping sets. The second approach is based on linear programming methods. Its advantage is the completeness of the resulting list of trapping sets, owing to its independence from the decoder parameters and channel characteristics; its disadvantage is high computational complexity. Within the framework of the second approach, this article proposes a new method for searching for trapping sets with lower computational complexity. The method solves a mixed integer linear programming problem using an a priori list of code vertices participating in the shortest cycles of the code graph.
Results. Using the proposed method, a search for trapping sets was performed for several low-density codes. The mathematical linear programming package IBM CPLEX version 12.8 was used, run on 32 threads of a 16-core AMD Ryzen 3950X processor with 32 GB of RAM (DDR4). In the (2640, 1320) Margulis code, the trapping set TS(6,6) was found by the proposed method in 0.53 s, a speedup of 8252.415 times over the Velazquez-Subramani method. Thanks to the high speed and completeness of the search, the trapping sets TS(62,16) and TS(52,14) were found for the first time in the (4896, 2474) Margulis code.
Conclusion. The paper proposes a new method for searching for trapping sets by solving a mixed integer linear programming problem with an a priori list of code vertices participating in the shortest cycles. The method is fast and ensures completeness of the search.
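The a priori list of vertices lying on the shortest cycles can be collected with a straightforward breadth-first search; the sketch below uses a toy adjacency list rather than a real LDPC Tanner graph, and is a naive illustration, not the paper's optimized procedure:

```python
# For each start vertex, BFS reports the length of a shortest cycle it can
# detect; the report equals the girth exactly when the start vertex lies on
# a shortest cycle, which is how the a priori vertex list is formed here.

from collections import deque

def bfs_shortest_cycle(graph, start):
    """Shortest cycle length detected by BFS from `start` (None if none).
    The value equals the girth iff `start` lies on a shortest cycle."""
    dist, parent = {start: 0}, {start: None}
    queue, best = deque([start]), None
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v], parent[v] = dist[u] + 1, u
                queue.append(v)
            elif parent[u] != v:          # non-tree edge closes a cycle
                length = dist[u] + dist[v] + 1
                best = length if best is None else min(best, length)
    return best

def shortest_cycle_vertices(graph):
    """A priori list: vertices lying on some shortest cycle of the graph."""
    bounds = {v: bfs_shortest_cycle(graph, v) for v in graph}
    girth = min(b for b in bounds.values() if b is not None)
    return sorted(v for v, b in bounds.items() if b == girth)
```

In the paper's setting these vertices seed the mixed integer linear program, shrinking the search space while preserving completeness of the trapping-set list.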
Purpose of research is to conduct a comparative analysis of texts generated by an artificial intelligence system when processing a natural-language source text and counter-texts that result from human understanding of the source literary text.
Methods. To achieve the goal and objectives of the study, the author used an experimental technique for the comparative analysis of the denotational structures of counter-texts. Participants in the experiment (seven 4th-year students of the Faculty of Additional Education “Translator in the Field of Professional Communications” and three associate professors of the Department of Foreign Languages of Southwest State University) assessed how successfully the semantic structure of the source text was recreated in 10 counter-texts of different origin: generated by AI and by humans.
Results. The results of the experiment indicate that the completeness of the semantic content of the generated text does not depend on the structure of the source text. Modern methods of semantic text processing by an AI system make it possible to obtain the output of full-fledged text works created taking into account the rules and norms of natural language.
AI systems successfully recreate the denotational structure of the text and reconstruct the syntactic structure.
Conclusion. Access to large databases makes it possible to train a neural network on large text corpora, which increases the accuracy and variability of the lexical units and constructions used. The accuracy of conveying the semantic content of the text varies: it depends on the degree of text compression. The higher the compression, the lower the accuracy may be, because the neural network is unable to rank denotational connections by relevance to the underlying meaning. The degree of accuracy in conveying semantic content is determined by success or failure in understanding the deep hidden meaning, which in turn depends on understanding the linguistic and extralinguistic context. The ability to recognize the situation model recreated in the source text is the key to understanding the hidden meaning. The AI system can recreate the surface denotational structure of the text quite correctly and accurately, but at the current stage of development it is not able to construct a model of the situation.
Purpose of research. The paper considers the process of selective pre-destruction of interphase boundaries in iron ores by magnetic-pulse treatment. An analysis of the stress-strain state and ductile fracture reveals the relative similarity of the fracture criteria in the main minerals of iron ores due to magnetostrictive deformation of magnetite grains.
Methods. It has been established that the strength and fracture toughness of magnetite exceed the analogous properties of calcite in the composition of skarn iron ores, while the strength and fracture toughness of quartz exceed the analogous properties of magnetite. A difference in the character of the destruction of skarn ores and ferruginous quartzites has been established. A criterion for estimating the degree of softening of interphase boundaries in iron ores under magnetic-pulse action, based on a probabilistic approach, is formulated.
Results. A theoretical estimate is made of the degree of selective softening of iron ores under magnetic-pulse treatment, taking into account the strength and magnetostriction properties of magnetite. The results of experiments on nanoindentation of interphase boundaries before and after magnetic-pulse treatment are presented.
Conclusion. Analysis of the lengths of microcracks developing under the influence of a nanoindenter shows the possibility of reducing the fracture toughness of iron ore after magnetic-pulse treatment.
Purpose of research. The purpose of this article is to develop a scientific and methodological approach to assessing the timeliness of decision-making in the operational management of car traffic routing in an urban agglomeration. Timeliness manifests itself in reducing the time needed to prepare a decision to change the route of a car delivering goods under a possible (predicted) increase in traffic congestion at intersections and on streets crossing the established route in the urban agglomeration.
Methods. The presented approach is based on the basic principles of management theory in organizational systems, the theory of rational consumer behavior, mathematical statistics, and simulation modeling.
Results. A variant of the generalized scheme of the operational control cycle for car routing in an urban agglomeration has been developed. A sequence for estimating the decision-making time in the operational control of vehicle traffic routing is proposed; its individual stages implement the main functions of the developed operational control cycle. Experimental dependences of the route-parameter analysis time during operational management on the number and quality of the routes under consideration were obtained in the AnyLogic 8.4.0 simulation environment.
Conclusion. The article considers an approach to assessing the timeliness of decision-making in the operational management of the route of a car delivering goods in an urban agglomeration. To minimize the time for making an operational decision, it is proposed to predict the intensity of traffic congestion and assess the quality of the route. Based on forecasting the road situation at the points where the route crosses streets with a high probability of congestion, and taking into account the number and quality of routes, the approach increases the timeliness of the decision to change the route of a car delivering goods in the urban agglomeration.
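The re-routing step can be sketched as a shortest-path recomputation over travel times inflated by a congestion forecast; the road network, edge weights, and congestion factors below are invented for illustration and are not the paper's AnyLogic model:

```python
# Dijkstra-based re-routing sketch: predicted congestion multiplies the
# travel time of the affected edges, after which the shortest route is
# recomputed on the adjusted graph.

import heapq

def dijkstra(graph, src, dst):
    """Shortest travel time and path in a weighted directed graph."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        t, u, path = heapq.heappop(pq)
        if u == dst:
            return t, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (t + w, v, path + [v]))
    return float("inf"), []

def reroute(base_graph, congestion, src, dst):
    """Apply predicted congestion factors (>= 1.0) to edge times, then solve."""
    adjusted = {u: [(v, w * congestion.get((u, v), 1.0)) for v, w in edges]
                for u, edges in base_graph.items()}
    return dijkstra(adjusted, src, dst)
```

With no congestion the cheaper route is kept; once the forecast inflates an edge on it past the alternative's total time, the decision to change the route follows automatically.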
Purpose of research. Increasing the efficiency of cargo port business processes through the use of risk management technology based on cognitive modeling.
Methods. A cognitive modeling approach to cargo port risk management is proposed, based on the comprehensive, step-by-step application of the concept of multi-level goal setting. It involves a thorough elaboration of the cargo port's goals and of indicators for assessing their achievability by developing a balanced scorecard (BSC) and constructing a logical-probabilistic (LP) model; a logical-ontological model developed on the basis of the connections established by the LP model; and a simulation model, driven by queries to the ontological model, used to check recommendations for adjusting the elements of the system under consideration, in order to select the most acceptable recommendations or combinations thereof and to formulate management decisions based on them.
Results. Based on the formulated purpose of the study and the assigned tasks, a concept of cognitive modeling was developed that uses knowledge about the connections between risks, goals, and indicators for assessing port activities, as well as refining coefficients and the nature of their mutual influence, to develop recommendations for managing the risks of a cargo port on the basis of queries to the ontological model. Within the proposed conceptual approach, the simulation model makes it possible to develop management decisions on adjusting the operational components of the system in order to prevent risk situations in the long term (at the tactical and strategic levels), taking into account the influence of external factors. Cognitive modeling in this work is based on the integration of logical-probabilistic, logical-ontological, and simulation modeling.
Conclusion. As a result of the implementation of the set goals and objectives, a cognitive model of cargo port risk management was proposed. This model combines various types of modeling and takes into account different levels of management. As a result of experiments with a simulation model, the most effective recommendations generated based on a query to the ontological model are selected as management decisions.
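The logical-probabilistic layer can be illustrated with a minimal sketch in which independent basic risk events are combined with OR/AND logic; the event names and probabilities are invented, not taken from the paper's model:

```python
# Minimal logical-probabilistic (LP) illustration: the probability of a
# hypothetical top risk event is computed from independent basic events.

def p_or(*ps):
    """P(A or B or ...) for independent events."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """P(A and B and ...) for independent events."""
    q = 1.0
    for p in ps:
        q *= p
    return q

# Hypothetical basic events for a cargo port.
p_crane_failure = 0.05
p_storm = 0.10
p_understaffed = 0.20

# Top event: handling delay = crane failure OR (storm AND understaffing).
p_delay = p_or(p_crane_failure, p_and(p_storm, p_understaffed))
```

In the full approach such top-event probabilities would be linked to BSC indicators through the ontological model, and the simulation model would then test which adjustments reduce them.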
ISSN 2686-6757 (Online)