We deal with the general issue of handling statistical data in archaeology for the purpose of deducing sound, justified conclusions. The employment of various quantitative and statistical methods has accompanied archaeological practice since its beginnings as a systematic discipline in the 19th century (Drower 1995). Since this early period, the focus of archaeological research has developed and shifted several times. The most recent phase in this process, especially prominent in recent decades, is the proliferation of collaborations with various branches of the exact and natural sciences. Many new avenues of inquiry have been inaugurated, and a wealth of information has become available to archaeologists. In our view, the plethora of newly obtained data requires a careful reexamination of existing statistical approaches and a restatement of the desired focus of some archaeological investigations. We are delighted to dedicate this article to Israel Finkelstein, our teacher, adviser, colleague, and friend, who is one of the father figures of this ongoing scientific revolution in archaeology (e.g., Finkelstein and Piasetzky 2010, Finkelstein et al. 2012, 2015), and wish him many more fruitful years of research.
The problem of finding a prototype for typewritten or handwritten characters belongs to a family of "shape prior" estimation problems. In epigraphic research, such priors are derived manually and constitute the building blocks of "paleographic tables". Suggestions for automatic solutions to the estimation problem are rare in both the Computer Vision and the OCR/Handwriting Text Recognition communities. We review some of the existing approaches and propose a new robust scheme, suited to the challenges of degraded historical documents. This fast and easy-to-implement method is applied to ancient Hebrew inscriptions dated to the First Temple period.
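As a hedged illustration of the kind of prototype estimation involved (not the article's own scheme), one simple robust baseline is the pixel-wise median of roughly aligned binary glyph images; the function below is a minimal sketch under that assumption, and its name and interface are ours.

    # Illustrative sketch (not the article's algorithm): a robust character
    # prototype computed as the pixel-wise median of aligned binary glyphs.
    import numpy as np

    def median_prototype(glyphs):
        """glyphs: list of equal-sized 2D binary arrays (0 = background, 1 = ink).
        Returns a binary prototype; the median is robust to outlier glyphs."""
        stack = np.stack([g.astype(float) for g in glyphs], axis=0)
        return (np.median(stack, axis=0) >= 0.5).astype(np.uint8)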
Our research team enjoyed the privilege of collaborating with Benjamin Sass over a period of several years. We are happy to dedicate this article to him and wish to express our gratitude for what has been both a prodigious and enjoyable experience. The purpose of our joint endeavor has been the introduction of modern techniques from computer science and physics to the realm of Iron Age epigraphy. One of the most important issues addressed during our cooperation was the topic of facsimile creation. Facsimile creation is a necessary preliminary step in the process of deciphering and analyzing ancient inscriptions. Several manual facsimile construction techniques are currently in use: drawing based on direct collation of the artifact; outlining on transparent paper overlaid on a photograph of the inscription; and computer-aided depiction via software such as Adobe Photoshop, Adobe Illustrator, Gimp or Inkscape (see Summary section below for software web links). Despite its importance for the field of epigraphy, little attention has thus far been devoted to the methodology of facsimile creation (though see the recent comprehensive treatment by Parker and Rollston 2016). Recent decades have seen rapid development and consolidation of various computerized image processing algorithms. Among the most basic and popular tasks in this field is the creation of a black-and-white version of a given image, known as image binarization (see Fig. 1a–b). Often, such a binarized image serves as a first step for further image-processing tasks, such as Optical Character Recognition (OCR), text digitization and text analysis. The algorithmic creation of binarizations can therefore be seen as another method of facsimile creation. Furthermore, a relatively new sub-domain of image processing, Historical Imaging and Processing (HIP), specializes in handling antique documents of different types, periods and origins. Accordingly, binarization algorithms stemming from HIP are even more suitable for archaeological purposes.
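For readers unfamiliar with binarization, the following minimal sketch shows a global Otsu thresholding pipeline in Python with scikit-image; the input file name is hypothetical, and HIP-oriented algorithms typically replace this single global threshold with adaptive, degradation-aware variants.

    # Minimal global binarization sketch using Otsu's threshold (scikit-image).
    from skimage import io, color, filters

    image = io.imread("inscription.jpg")          # hypothetical input file
    gray = color.rgb2gray(image)
    threshold = filters.threshold_otsu(gray)
    facsimile = gray < threshold                  # True = ink (dark foreground)
    io.imsave("facsimile.png", (facsimile * 255).astype("uint8"))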
Chan-Vese is an important and well-established segmentation method. However, it tends to be challenging to apply, with difficulties such as choosing a good initialization and setting the values of several free parameters. The paper presents a detailed analysis of the Chan-Vese framework. It establishes a relation between the Otsu binarization method and the fidelity terms of the Chan-Vese energy functional, allowing for an intelligent initialization of the scheme. An alternative, fast, and parameter-free morphological segmentation technique is also suggested. Our experiments indicate the soundness of the proposed algorithm.
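A hedged sketch of such an Otsu-informed initialization, using scikit-image's chan_vese implementation rather than the paper's own code, might look as follows; the initial level set is simply set positive on the Otsu foreground and negative elsewhere, and the test image is a stand-in.

    # Sketch: initialize Chan-Vese from an Otsu binarization (assumed workflow).
    import numpy as np
    from skimage import data, filters, segmentation

    image = data.camera().astype(float) / 255.0
    otsu_mask = image > filters.threshold_otsu(image)
    # Signed initial level set: +0.5 on the Otsu foreground, -0.5 elsewhere.
    init_ls = np.where(otsu_mask, 0.5, -0.5)
    result = segmentation.chan_vese(image, mu=0.25, init_level_set=init_ls)
    # result is the final binary segmentation mask.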
This article discusses the quality assessment of binary images. The customary ground-truth-based methodology used in the literature is shown to be problematic due to its subjective nature. Several previously suggested alternatives are surveyed and are also found to be inadequate in certain scenarios. A new approach, quantifying the adherence of a binarization to its document image, is proposed and tested using six different measures of accuracy. The measures are evaluated experimentally on datasets from the DIBCO and H-DIBCO competitions, with respect to different kinds of binarization degradations.
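By way of illustration only (this is not one of the article's six measures), a simple ground-truth-free score in the same spirit can ask how well the binarization's foreground/background labels separate the gray levels of the original document image; the function name is ours.

    # Illustrative ground-truth-free score: between-class variance of the
    # document's gray levels under the partition induced by the binarization.
    import numpy as np

    def adherence_score(gray, binary_mask):
        """gray: 2D float image; binary_mask: True = foreground (ink).
        Higher values indicate a cleaner separation of ink from background."""
        fg, bg = gray[binary_mask], gray[~binary_mask]
        if fg.size == 0 or bg.size == 0:
            return 0.0
        w_fg = fg.size / gray.size
        w_bg = 1.0 - w_fg
        return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2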
The discipline of First Temple Period epigraphy (the study of writing) relies heavily on manually drawn facsimiles (black-and-white images) of ancient inscriptions. This practice may unintentionally mix documentation with interpretation. The article proposes a new method for evaluating the quality of a facsimile, based on a measure comparing the image of the inscription to the registered facsimile. Some empirical results supporting the methodology are presented. The technique is also relevant to quality evaluation of other types of facsimiles, and to binarization in general.
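A minimal sketch of the registration-then-scoring idea, assuming a translation-only alignment via phase correlation (scikit-image) and a simple contrast-based fit score as a stand-in for the article's actual measure; the function name and score are illustrative only.

    # Hedged sketch: align a facsimile to the inscription photo, then score it.
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def facsimile_fit(gray, facsimile):
        """gray: 2D float photo (0 = dark ink, 1 = bright background);
        facsimile: binary drawing of the same size (True = ink)."""
        offset, _, _ = phase_cross_correlation(gray, facsimile.astype(float))
        reg = nd_shift(facsimile.astype(float), offset, order=0) > 0.5
        # Ink pixels of a faithful facsimile should cover dark image regions.
        return gray[~reg].mean() - gray[reg].mean()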
The relationship between the expansion of literacy in Judah and the composition of biblical texts has attracted scholarly attention for over a century. Information on this issue can be deduced from Hebrew inscriptions from the final phase of the First Temple period. We report our investigation of 16 inscriptions from the Judahite desert fortress of Arad, dated ca. 600 BCE, on the eve of Nebuchadnezzar's destruction of Jerusalem. The inquiry is based on new methods for image processing and document analysis, as well as machine learning algorithms. These techniques enable identification of the minimal number of authors in a given group of inscriptions. Our algorithmic analysis, complemented by the textual information, reveals a minimum of six authors within the examined inscriptions. The results indicate that in this remote fort, literacy had spread throughout the military hierarchy, down to the quartermaster and probably even below that rank. This implies that an educational infrastructure that could support the composition of literary texts in Judah already existed before the destruction of the First Temple. A similar level of literacy in this area is attested again only some 400 years later, ca. 200 BCE.
The binarization of a challenging historical inscription is improved by means of sparse methods. The approximation is based on a binary dictionary learned by k-medians and k-medoids algorithms from a clear source. Preliminary results show superiority over the existing binarization with respect to fine features such as stroke continuity, deviations from a straight line, edge noise and the presence of stains. The k-medians dictionary-learning scheme shows robust behavior when the initial patch database is reduced.
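A minimal, self-contained k-medians sketch for learning binary patch atoms from a clear source (illustrative only; the article's dictionary-learning and sparse-approximation pipeline is more elaborate, and the function name is ours):

    # Minimal k-medians dictionary learning on flattened binary patches.
    import numpy as np

    def kmedians_binary(patches, k=32, iters=20, seed=0):
        """patches: (N, d) array with 0/1 entries. Returns k binary atoms."""
        rng = np.random.default_rng(seed)
        atoms = patches[rng.choice(len(patches), k, replace=False)].astype(float)
        for _ in range(iters):
            # Assign each patch to the nearest atom in L1 (Manhattan) distance.
            dist = np.abs(patches[:, None, :] - atoms[None, :, :]).sum(axis=2)
            labels = dist.argmin(axis=1)
            for j in range(k):
                members = patches[labels == j]
                if len(members):
                    # Coordinate-wise median of binary values, rounded to {0, 1}.
                    atoms[j] = (np.median(members, axis=0) >= 0.5)
        return atoms.astype(np.uint8)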
Preconditioners for hyperbolic systems are numerical devices used to accelerate convergence to a steady state. In addition, the preconditioner should also be included in the artificial viscosity or upwinding terms to improve the accuracy of the steady-state solution. For time-dependent problems we use a dual time stepping approach. The preconditioner affects the convergence rate and the accuracy of the subiterations within each physical time step. We consider two types of local preconditioners: Jacobi and low-speed preconditioning. The algorithm can be expressed in several sets of variables while using only the conservation variables for the flux terms. We compare the effect of these various variable sets on the efficiency and accuracy of the scheme.
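Schematically, and in our own notation rather than necessarily the paper's, the dual time stepping formulation with a local preconditioner P can be written as

    P^{-1} \frac{\partial w}{\partial \tau} + \frac{\partial w}{\partial t} + R(w) = 0,

where t is the physical time, \tau the pseudo-time, and R(w) the spatial residual. Driving the pseudo-time derivative to zero within each physical time step recovers \partial w/\partial t + R(w) = 0, so P influences the convergence of the subiterations and, through the dissipation terms in which it appears, their accuracy.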
We consider the steady state equations for a compressible fluid. For low speed flow the system is stiff, since the ratio of the convective speed to the speed of sound is very small. To overcome this difficulty we alter the time dependency of the equations while retaining the same steady state operator. In order to achieve high numerical resolution we also alter the artificial dissipation (or Roe matrix) of the numerical scheme. The definition of preconditioners and artificial dissipation terms can be formulated conveniently by using other sets of dependent variables rather than the conservation variables. The effects of different preconditioners, artificial dissipation and grid density on the accuracy and convergence to the steady state of the numerical solutions are presented in detail. The numerical results obtained for inviscid and viscous two- and three-dimensional flows over external aerodynamic bodies indicate that efficient multigrid computations of flows with very low Mach numbers are now possible.
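In schematic form (our notation; the paper's precise scaling and variable sets may differ), the preconditioned system and a matching matrix dissipation can be written as

    P^{-1} \frac{\partial w}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} = 0,
    \qquad
    F_{i+1/2} = \frac{1}{2}\left(F_L + F_R\right) - \frac{1}{2} P^{-1} \left| P A \right| \left( w_R - w_L \right),

so that at low Mach numbers the preconditioner equalizes the characteristic speeds both in the time-marching operator and in the artificial dissipation, removing the stiffness without changing the steady-state operator.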
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
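For reference, the explicit multistage Runge-Kutta update used on the fine grid can be written schematically (our notation) as

    w^{(0)} = w^n, \qquad
    w^{(k)} = w^{(0)} - \alpha_k \, \Delta t \, R\!\left(w^{(k-1)}\right), \quad k = 1, \dots, m, \qquad
    w^{n+1} = w^{(m)},

with stage coefficients \alpha_k chosen for stability and for damping of the high-frequency error modes that the multigrid cycle relies on.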
The behavior of sound in a jet was investigated. It is verified that the far-field acoustic power increases with flow velocity in the low and medium frequency range, while an attenuation at higher frequencies is observed experimentally. The increase is found numerically to be due primarily to the interactions between the mean vorticity and the fluctuating velocities. Spectral decomposition of the real-time data indicates that the power increase occurs in the low and middle frequency range, where the local instability waves have the largest spatial growth rate. The connection between this amplification and the local instability waves is discussed.
Results obtained from a series of adiabatic and dissipative codes that exploit the Grad-Hogan theory of classical transient diffusion and skin penetration are described. The models include two dimensions and axial symmetry; a variety of constraints, such as specified plasma volume, toroidal and poloidal currents and fluxes, free boundaries in vacuum and force-free fields, limiters, etc.; and a variety of topologies, including time-dependent changes of topology (isolation, tearing, annihilation of islands). The principal physical result is that, under appropriate circumstances, there is significant quantitative disagreement with 'classical' (Pfirsch-Schlueter) diffusion and the classical skin effect (current penetration). The chief numerical achievements are a family of accurate and efficient algorithms and codes which are useful in their own right and should also provide a basis for a new generation of plasma-modeling, curve-fitting codes (with added atomic physics, radiation, and anomaly factors), but with the geometry and time dependence treated realistically.
The effects of numerical dissipation upon solutions to the Euler equations are considered, and results for transonic flows past airfoils are presented to demonstrate the effects of the dissipative terms. The equations are approximated using a finite-volume spatial approximation with added dissipation provided by an adaptive mixture of second and fourth differences. The resulting difference equations are solved using either an explicit multistage Runge-Kutta method or a diagonalized implicit method. It is found that errors in surface values can be introduced by the averaging required to calculate derived quantities of interest.
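A representative form of such an adaptive blend of second and fourth differences is the classical pressure-switched (JST-type) dissipation, shown here in one dimension in our notation (the paper's precise scaling may differ):

    d_{i+1/2} = \varepsilon^{(2)}_{i+1/2}\,(w_{i+1}-w_i) - \varepsilon^{(4)}_{i+1/2}\,(w_{i+2}-3w_{i+1}+3w_i-w_{i-1}),

    \nu_i = \frac{|p_{i+1}-2p_i+p_{i-1}|}{p_{i+1}+2p_i+p_{i-1}}, \qquad
    \varepsilon^{(2)}_{i+1/2} = \kappa^{(2)}\max(\nu_i,\nu_{i+1}), \qquad
    \varepsilon^{(4)}_{i+1/2} = \max\!\left(0,\; \kappa^{(4)} - \varepsilon^{(2)}_{i+1/2}\right),

so that the second-difference term switches on near shocks (where the pressure sensor \nu is large) while the fourth-difference term provides background damping in smooth regions.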
The present 2D simulations of plane and axisymmetric jets, which are relevant to current efforts to suppress the jet exhaust noise of prospective high-speed civil transports, were conducted by solving the full Navier-Stokes equations by means of a high-order finite-difference scheme. The simulations reproduce the correct mode shape after an adjustment region of about 10 diameters, and the results are in good agreement with linear-theory predictions of the growth of instability waves.
We consider fourth-order accurate compact schemes for numerical solutions of the Maxwell equations. We use the same mesh stencil as the standard Yee scheme; in particular, extra information over a wider stencil is not required. Hence, it is relatively easy to modify an existing code based on the Yee algorithm to make it fourth-order accurate. Also, a staggered mesh makes the boundary treatment easier. Finally, a staggered grid system gives a lower error than a non-staggered system.
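As an illustration of the kind of scheme involved (our notation, not necessarily the exact stencil of the paper), a standard fourth-order compact approximation of a first derivative on a staggered mesh solves the tridiagonal relation

    \frac{1}{24}\left(f'_{i-1} + 22\,f'_i + f'_{i+1}\right) = \frac{f_{i+1/2} - f_{i-1/2}}{h},

which determines f'_i to O(h^4) while using only the nearest staggered values f_{i\pm 1/2}, exactly the data already available on a Yee mesh.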
We introduce time reversed absorbing conditions (TRAC) in time reversal methods. These new boundary conditions enable one to "recreate the past" without knowing the source that has emitted the signals that are back-propagated. The method does not rely on any a priori knowledge of the physical properties of the inclusion. We prove an energy estimate for the resulting non-standard boundary value problem. Numerical tests are presented in two dimensions for the wave and Helmholtz equations. In particular, the TRAC method is applied to discriminating between a single inclusion and two close inclusions. The technique is fairly insensitive to noise in the data.
The authors present a new method of writer identification, employing the full power of multiple experiments, which yields a statistically significant result. Each individual binarized and segmented character is represented as a histogram of 512 binary pixel patterns—3 × 3 black and white patches. In the process of comparing two given inscriptions under a "single author" assumption, the algorithm performs a Kolmogorov–Smirnov test for each letter and each patch. The resulting p-values are combined using Fisher's method, producing a single p-value. Experiments on both Modern and Ancient Hebrew data sets demonstrate the excellent performance and robustness of this approach.
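A hedged sketch of the statistical core described above, using scipy's ks_2samp and combine_pvalues (Fisher's method); the binarization, segmentation, and 3 × 3 patch-histogram extraction are assumed to have been performed already, and the data layout and function name are our own illustrative choices.

    # Sketch: combine per-letter, per-patch KS tests into one p-value (Fisher).
    from scipy.stats import ks_2samp, combine_pvalues

    def single_author_pvalue(samples_a, samples_b):
        """samples_a, samples_b: dicts mapping letter -> (n_i, 512) arrays,
        where each row is the patch histogram of one character occurrence."""
        pvals = []
        for letter in set(samples_a) & set(samples_b):
            ha, hb = samples_a[letter], samples_b[letter]
            for patch in range(512):
                # Two-sample KS test on this patch's frequency across occurrences.
                pvals.append(ks_2samp(ha[:, patch], hb[:, patch]).pvalue)
        _, combined = combine_pvalues(pvals, method="fisher")
        return combined

A small combined p-value argues against the "single author" assumption for the pair of inscriptions being compared.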