CAA 2017
3D Acquisition/Object Modeling
Tuesday, March 14
 

9:50am EDT

NEH Roundtable on Advanced Challenges in Theory and Practice in 3D Modeling
In 2015-16, the NEH Advanced Topics in the Digital Humanities Summer Institute "Advanced Challenges in Theory and Practice in 3D Modeling of Cultural Heritage Sites" was held jointly at the University of Massachusetts Amherst and the University of California, Los Angeles. A group of 35 national and international scholars came together to discuss what they saw as the key issues facing those working with 3D content in the digital humanities (http://advancedchallenges.com). This roundtable seeks to bring faculty from that NEH Institute together with the larger CAA community to encourage a robust discussion of the key issues facing scholars working in 3D. We seek short (5 minute) papers from the CAA community on one of the following topics: (1) Metadata; (2) Publishing in 3D, Sustainability, Preservation, and Forward Migration; or (3) Technology Learning Curve/Infrastructure for Collaboration. A roundtable discussion with audience and panel members will follow each discussion topic. Papers could address the following questions, or others:
• Metadata -- What information needs to be associated with 3D content to make it usable by secondary scholars? How do standards for data collection about a research object differ across disciplines? How could metadata for one discipline be made nimble enough to be useful for others? Considering the speed at which technology changes, are operating principles more feasible than standards?
• Publishing 3D Work -- What does it actually mean to publish 3D work? Are we giving primacy to 3D models? What kinds of annotations do we need to support? What kinds of interactions would we want to support? How do we support visual/spatial/kinetic/sequential argumentation within 3D space? How do we track and support different versions of a published model or database? How do we track use statistics? How does one peer review digital work that challenges the prevailing print traditions? How does the user experience impact peer review?
• Sustainability, Preservation, and Forward Migration -- Where does 3D content stand in terms of library collection development? What kinds of 3D content are most likely to be addressed by repositories in the near future? How might scholars working with 3D artifacts influence decisions that will shape collection policies relating to this content? What does it mean to archive a 3D project? What files should be preserved? How long should content be archived: short-term or long-term? When can archived material be de-accessioned? How do we address situations where proprietary software/platforms used for a given project are no longer available or supported by their creator? And what can we do immediately to begin preserving our own work?
• Technology Learning Curve/Infrastructure for Collaboration -- How do we address the technology learning curve? What information can be generated and shared with subsequent scholars to encourage 3D research? Is there a standard 3D toolkit? Would it help if recommendations for project development were posted online? How do we support scholars working at institutions without the infrastructure for 3D work? How do we build a community of scholars working with 3D? How can we easily connect scholars across disciplinary boundaries?

Moderators
Tuesday March 14, 2017 9:50am - 12:30pm EDT
SCW 464 Student Center West
 
Wednesday, March 15
 

8:30am EDT

3D Digitization of Sculpture (Part I)
(Part I) This session is devoted to the issues arising from the digitization of a particular class of 3D object—works of sculpture. Improvements in recent years in photogrammetric modeling have made it possible for the first time to undertake wholesale 3D digitization of large collections of sculpture. For example, the session organizers are currently leading an effort to create 3D models of all the ancient sculpture in the Uffizi Galleries in Florence, Italy (ca. 1,250 objects). This session will present new work in this area of application, covering topics arising at every stage in the workflow: 3D data capture (whether using laser or structured-light scanners, photogrammetry, or some combination of technologies), 3D modeling, 3D restoration (including the restoration of polychromy), optimization of 3D models, best practices for metadata and paradata, and WebGL solutions for publishing 3D models online, as well as AR and VR applications. We welcome submissions concerning any of these topics. Equally welcome are submissions concerning the use of 3D models of sculpture for museum education and outreach, or the utility of the digital model for purposes of scholarly analysis, experimentation, and interpretation.

Moderators
Wednesday March 15, 2017 8:30am - 10:30am EDT
SCW 466/468 Student Center West

10:50am EDT

3D Digitization of Sculpture (Part II)
(Part II) This session is devoted to the issues arising from the digitization of a particular class of 3D object—works of sculpture. Improvements in recent years in photogrammetric modeling have made it possible for the first time to undertake wholesale 3D digitization of large collections of sculpture. For example, the session organizers are currently leading an effort to create 3D models of all the ancient sculpture in the Uffizi Galleries in Florence, Italy (ca. 1,250 objects). This session will present new work in this area of application, covering topics arising at every stage in the workflow: 3D data capture (whether using laser or structured-light scanners, photogrammetry, or some combination of technologies), 3D modeling, 3D restoration (including the restoration of polychromy), optimization of 3D models, best practices for metadata and paradata, and WebGL solutions for publishing 3D models online, as well as AR and VR applications. We welcome submissions concerning any of these topics. Equally welcome are submissions concerning the use of 3D models of sculpture for museum education and outreach, or the utility of the digital model for purposes of scholarly analysis, experimentation, and interpretation.

Moderators
Wednesday March 15, 2017 10:50am - 12:30pm EDT
SCW 466/468 Student Center West

2:00pm EDT

Quality Assurance and Quality Control in Image-Based Modelling (Part I)
(Part I) The increased affordability, and perceived ease of use, of technologies developed for engineering and remote sensing have led to their widespread adoption in archaeology over the past five years. In particular, low-cost 3D image-based modelling solutions like Agisoft PhotoScan have made photogrammetry and “Structure from Motion” (SfM) buzzwords of the moment. The ability of such software to ingest relatively unstructured images, which lack any deliberate camera network design other than high overlap, and yet produce visually pleasing meshed surfaces is an astonishing application of computer-vision stereo-matching algorithms originally intended for basic scene reconstruction. Indeed, some software packages or online services purport to offer high-accuracy finished meshes without any user intervention whatsoever. But as the noted Australian photogrammetrist Clive Fraser argued in an editorial in the Photogrammetric Record (30(149), March 2015, 3-7), no purely SfM software package has made it to the commercial market, nor is there any package that can produce results of the highest orders of accuracy without significant user intervention. In order to achieve accuracy and repeatability, the non-linear “thick lens” camera model used by photogrammetric engineers has had to be integrated. His editorial is at once an informative overview of a field that has witnessed continuous development since the early efforts of Albrecht Maydenbauer in the late 19th century to conduct photographic surveys of built heritage to millimetric levels of accuracy, and a warning: without a firm grasp of the underlying principles of photogrammetric engineering, the enthusiastic adoption of computer-vision techniques can undermine decades of effort to establish standards of photogrammetric accuracy. The rapid adoption of SfM recording methods, largely, it seems, because of their ease of use, has led archaeologists into dangerous territory, especially where these new technologies have wholly supplanted time-tested manual methods. Without the quality control and quality assurance processes that are routinely followed in industry with these same technologies, the reliability, repeatability, and long-term value of such 3D data products may be in jeopardy. This session will consider how established QA/QC regimes can rectify this situation. These may include robust methods of camera calibration, the use of ground-control points and check-points to validate model accuracy, the archiving of photogrammetric datasets, accurate drawing from 3D data, appropriate file formats, the history of photogrammetry in archaeology, and discussion of what constitutes a final photogrammetric product in archaeology. The session will challenge the participants and audience to look beyond mere “visualization” as the exclusive function of SfM and image-based modelling, towards reliable methods that can match or exceed the quality and repeatability of traditional methods of documentation. This session also welcomes contributions from users of laser scanning. The advent of more affordable systems like the FARO Focus3D has led to increased adoption of laser scanning technologies in research and teaching environments, in part because of the financial structure of institutions in relation to the price point of the equipment. Not every laser scanner, however, is the same, and there are parallels with the case of photogrammetry and computer vision.
For example, there has never been an established standard for adequately comparing laser systems between manufacturers. The immediate point of reference is the manufacturer's specifications, which are determined using different methods by each manufacturer. It has always been down to end users, research institutions like i3Mainz, or laboratories such as the Seibersdorf Laboratory in Austria to generate comparison reports. The laser-scanning part of the session welcomes case studies examining differences in angular (arcsecond) accuracy and precision between laser scanners, experiences in using E57 as a universal file format versus .LAS, and best practice in target-based versus targetless workflows.
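The check-point validation mentioned above lends itself to a very simple computation. The sketch below is illustrative only and not part of the session call: it assumes NumPy, and the coordinates are hypothetical. Given check-point coordinates read from a photogrammetric model and the same points surveyed independently (e.g. by total station) in the same coordinate system, it reports per-axis and 3D root-mean-square error, one common QA/QC figure for model accuracy.

```python
import numpy as np

def checkpoint_rmse(measured_xyz, surveyed_xyz):
    """RMSE of model check-points against independently surveyed coordinates.

    measured_xyz : (N, 3) check-point coordinates read from the 3D model
    surveyed_xyz : (N, 3) same points measured independently (e.g. total station),
                   in the same coordinate system and units.
    """
    measured = np.asarray(measured_xyz, dtype=float)
    surveyed = np.asarray(surveyed_xyz, dtype=float)
    residuals = measured - surveyed                                 # per-point error vectors
    per_axis_rmse = np.sqrt(np.mean(residuals**2, axis=0))          # RMSE in X, Y, Z
    total_rmse = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))     # 3D RMSE
    return per_axis_rmse, total_rmse

# Hypothetical example: three check-points with millimetre-level disagreement.
model_pts = np.array([[10.002, 5.001, 1.499],
                      [12.499, 5.003, 1.502],
                      [11.001, 7.498, 1.500]])
survey_pts = np.array([[10.000, 5.000, 1.500],
                       [12.500, 5.000, 1.500],
                       [11.000, 7.500, 1.500]])

axis_rmse, rmse_3d = checkpoint_rmse(model_pts, survey_pts)
print("RMSE per axis (X, Y, Z):", axis_rmse)
print("3D RMSE:", rmse_3d)
```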

Moderators
Wednesday March 15, 2017 2:00pm - 3:40pm EDT
SCW 466/468 Student Center West

4:00pm EDT

Quality Assurance and Quality Control in Image-Based Modelling (Part II)
(Part II) The increased affordability, and perceived ease of use, of technologies developed for engineering and remote sensing have led to their widespread adoption in archaeology over the past five years. In particular, low-cost 3D image-based modelling solutions like Agisoft PhotoScan have made photogrammetry and “Structure from Motion” (SfM) buzzwords of the moment. The ability of such software to ingest relatively unstructured images, which lack any deliberate camera network design other than high overlap, and yet produce visually pleasing meshed surfaces is an astonishing application of computer-vision stereo-matching algorithms originally intended for basic scene reconstruction. Indeed, some software packages or online services purport to offer high-accuracy finished meshes without any user intervention whatsoever. But as the noted Australian photogrammetrist Clive Fraser argued in an editorial in the Photogrammetric Record (30(149), March 2015, 3-7), no purely SfM software package has made it to the commercial market, nor is there any package that can produce results of the highest orders of accuracy without significant user intervention. In order to achieve accuracy and repeatability, the non-linear “thick lens” camera model used by photogrammetric engineers has had to be integrated. His editorial is at once an informative overview of a field that has witnessed continuous development since the early efforts of Albrecht Maydenbauer in the late 19th century to conduct photographic surveys of built heritage to millimetric levels of accuracy, and a warning: without a firm grasp of the underlying principles of photogrammetric engineering, the enthusiastic adoption of computer-vision techniques can undermine decades of effort to establish standards of photogrammetric accuracy. The rapid adoption of SfM recording methods, largely, it seems, because of their ease of use, has led archaeologists into dangerous territory, especially where these new technologies have wholly supplanted time-tested manual methods. Without the quality control and quality assurance processes that are routinely followed in industry with these same technologies, the reliability, repeatability, and long-term value of such 3D data products may be in jeopardy. This session will consider how established QA/QC regimes can rectify this situation. These may include robust methods of camera calibration, the use of ground-control points and check-points to validate model accuracy, the archiving of photogrammetric datasets, accurate drawing from 3D data, appropriate file formats, the history of photogrammetry in archaeology, and discussion of what constitutes a final photogrammetric product in archaeology. The session will challenge the participants and audience to look beyond mere “visualization” as the exclusive function of SfM and image-based modelling, towards reliable methods that can match or exceed the quality and repeatability of traditional methods of documentation. This session also welcomes contributions from users of laser scanning. The advent of more affordable systems like the FARO Focus3D has led to increased adoption of laser scanning technologies in research and teaching environments, in part because of the financial structure of institutions in relation to the price point of the equipment. Not every laser scanner, however, is the same, and there are parallels with the case of photogrammetry and computer vision.
For example, there has never been an established standard for adequately comparing laser systems between manufacturers. The immediate point of reference is the manufacturer's specifications, which are determined using different methods by each manufacturer. It has always been down to end users, research institutions like i3Mainz, or laboratories such as the Seibersdorf Laboratory in Austria to generate comparison reports. The laser-scanning part of the session welcomes case studies examining differences in angular (arcsecond) accuracy and precision between laser scanners, experiences in using E57 as a universal file format versus .LAS, and best practice in target-based versus targetless workflows.

Moderators
Wednesday March 15, 2017 4:00pm - 5:40pm EDT
SCW 466/468 Student Center West
 
Thursday, March 16
 

8:30am EDT

Close Range 3D Data Acquisition, Processing, Querying and Presentation in Cultural Heritage (Part I)
(Part I) This session collects contributions concerned with 3D data and how they are employed to tackle research questions or for effective presentation. While much attention has been paid in recent years to evaluating different methodologies of data acquisition and comparing algorithms for processing or querying 3D data, much less attention has been given to specifying the data quality required to solve research questions. What resolution is required or useful for displaying data in a 3D viewer? What precision is needed for a comparative study of gypsum casts? What do we learn from comparative studies with regard to required data quality? How can we analyse 3D data to answer specific archaeological questions? How do we acquire color and 3D information from the same object? This session particularly invites presenters working with 3D data who can provide feedback on the acquisition process, to optimize data collection and to identify the best data collection procedure for a given project. At the same time, we welcome contributions that focus on the analysis of 3D data to solve archaeological questions, whether predominantly application-based or grounded in acquisition methodology.
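As an illustration of the kind of data-quality question raised above (how much precision a comparative study actually needs), the following minimal sketch compares two registered point clouds of the same object by nearest-neighbour cloud-to-cloud distance. It is not tied to any session paper; it assumes NumPy and SciPy, and the "reference"/"recapture" clouds are synthetic stand-ins for, say, a structured-light scan and a photogrammetric model.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(cloud_a, cloud_b):
    """Nearest-neighbour distance from every point in cloud_a to cloud_b.

    A crude but common way to compare two captures of the same object once
    they have been registered into the same coordinate system.
    """
    tree = cKDTree(np.asarray(cloud_b, dtype=float))
    distances, _ = tree.query(np.asarray(cloud_a, dtype=float))
    return distances

# Hypothetical example: a noisy re-capture of a synthetic 1 m square patch.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, size=(5000, 3))
recapture = reference + rng.normal(0.0, 0.001, size=reference.shape)  # ~1 mm noise

d = cloud_to_cloud_distances(recapture, reference)
print(f"mean: {d.mean()*1000:.2f} mm, "
      f"RMS: {np.sqrt((d**2).mean())*1000:.2f} mm, "
      f"95th percentile: {np.percentile(d, 95)*1000:.2f} mm")
```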

Moderators
Thursday March 16, 2017 8:30am - 10:30am EDT
SCW 466/468 Student Center West

10:50am EDT

Close Range 3D Data Acquisition, Processing, Querying and Presentation in Cultural Heritage (Part II)
(Part II) This session collects contributions concerned with 3D data and how they are employed to tackle research questions or for effective presentation. While much attention has been paid in recent years to evaluating different methodologies of data acquisition and comparing algorithms for processing or querying 3D data, much less attention has been given to specifying the data quality required to solve research questions. What resolution is required or useful for displaying data in a 3D viewer? What precision is needed for a comparative study of gypsum casts? What do we learn from comparative studies with regard to required data quality? How can we analyse 3D data to answer specific archaeological questions? How do we acquire color and 3D information from the same object? This session particularly invites presenters working with 3D data who can provide feedback on the acquisition process, to optimize data collection and to identify the best data collection procedure for a given project. At the same time, we welcome contributions that focus on the analysis of 3D data to solve archaeological questions, whether predominantly application-based or grounded in acquisition methodology.

Moderators
Thursday March 16, 2017 10:50am - 12:30pm EDT
SCW 466/468 Student Center West

2:00pm EDT

Close Range 3D Data Acquisition, Processing, Querying and Presentation in Cultural Heritage (Part III)
(Part III) This session collects contributions concerned with 3D data and how they are employed to tackle research questions or for effective presentation. While much attention has been paid in recent years to evaluating different methodologies of data acquisition and comparing algorithms for processing or querying 3D data, much less attention has been given to specifying the data quality required to solve research questions. What resolution is required or useful for displaying data in a 3D viewer? What precision is needed for a comparative study of gypsum casts? What do we learn from comparative studies with regard to required data quality? How can we analyse 3D data to answer specific archaeological questions? How do we acquire color and 3D information from the same object? This session particularly invites presenters working with 3D data who can provide feedback on the acquisition process, to optimize data collection and to identify the best data collection procedure for a given project. At the same time, we welcome contributions that focus on the analysis of 3D data to solve archaeological questions, whether predominantly application-based or grounded in acquisition methodology.

Moderators
Thursday March 16, 2017 2:00pm - 3:40pm EDT
SCW 466/468 Student Center West

2:00pm EDT

Structural Analysis for Cultural Heritage (Part I)
(Part I) Ever present in the world of cultural heritage are the challenges associated with the assessment, diagnosis, and preservation of as-built infrastructure with potentially unknown materials, techniques, or damage. Historical buildings, monuments, and sculptures require delicate handling; the techniques used to capture their existing conditions must therefore be non-destructive, while still acquiring accurate information at the surface, subsurface, and volumetric levels. Collaboration between engineers, scientists, historians, and other stakeholders can reach beyond documentation and visualization towards the production of actionable data on the current “state of health” of buildings, monuments, and artworks, and towards predicting how structures or their constituent elements might respond to theoretical stresses in the future. Potential topics include modeling at different scales (micro vs. macro), characterization of the effects of common forces (seismic, subsidence, weathering, vandalism, etc.) and their potential impact, as well as structural monitoring and lifecycle management. Recent and ongoing research explores the application of Building Information Modeling (BIM), Finite Element Analysis (FEA), and other analytical approaches to cultural heritage. Technology must be leveraged to aid in modeling and simulating problematic aspects such as heterogeneous materials, existing damage patterns, seismic vulnerability, and unknown construction techniques. Structural engineering methods and software tools better enable cultural heritage practitioners to make informed decisions through understanding how the built environment responds to the ever-present forces that shape it.
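For readers unfamiliar with the Finite Element Analysis (FEA) mentioned above, the sketch below reduces it to its simplest possible case: a one-dimensional axial bar, discretized into elements, with a global stiffness matrix assembled and solved for nodal displacements. It is a teaching illustration only, assuming NumPy; the material constants, geometry, and load are invented and do not describe any real monument.

```python
import numpy as np

# Minimal 1D finite-element sketch: an axial bar split into equal elements,
# fixed at one end and loaded at the other. Shows the core FEA steps of
# stiffness assembly and solution for nodal displacements.

E = 30e9        # Young's modulus in Pa (assumed for illustration)
A = 0.04        # cross-sectional area in m^2 (assumed)
L_total = 2.0   # bar length in m (assumed)
n_elem = 4      # number of elements
P = 50e3        # axial load in N applied at the free end (assumed)

n_nodes = n_elem + 1
L_e = L_total / n_elem
k_e = (E * A / L_e) * np.array([[1.0, -1.0],
                                [-1.0, 1.0]])   # element stiffness matrix

# Assemble the global stiffness matrix from the element matrices.
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += k_e

# Load vector: a single point load at the last (free) node.
F = np.zeros(n_nodes)
F[-1] = P

# Apply the fixed boundary condition at node 0 by removing its row/column,
# then solve the reduced system for the remaining nodal displacements.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

print("nodal displacements (m):", u)
print("tip displacement, analytical P*L/(E*A):", P * L_total / (E * A))
```

The computed tip displacement matches the closed-form value P*L/(E*A), which is a quick sanity check of the assembly and boundary-condition handling.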

Moderators
Dominique Risollo

Archaeologist and Special Projects Coordinator, University of California, San Diego
I am affiliated with the Center of Interdisciplinary Science for Art, Architecture, and Archaeology (CISA3) at UC San Diego and work closely with students and institutional partners on projects related to diagnostic imaging and analytical diagnostics for cultural heritage.

Thursday March 16, 2017 2:00pm - 3:40pm EDT
SCW 460/462 Student Center West

4:00pm EDT

Close Range 3D Data Acquisition, Processing, Querying and Presentation in Cultural Heritage (Part IV)
(Part IV) This session collects contributions concerned with 3D data and how they are employed to tackle research questions or for effective presentation. While much attention has been paid in recent years to evaluating different methodologies of data acquisition and comparing algorithms for processing or querying 3D data, much less attention has been given to specifying the data quality required to solve research questions. What resolution is required or useful for displaying data in a 3D viewer? What precision is needed for a comparative study of gypsum casts? What do we learn from comparative studies with regard to required data quality? How can we analyse 3D data to answer specific archaeological questions? How do we acquire color and 3D information from the same object? This session particularly invites presenters working with 3D data who can provide feedback on the acquisition process, to optimize data collection and to identify the best data collection procedure for a given project. At the same time, we welcome contributions that focus on the analysis of 3D data to solve archaeological questions, whether predominantly application-based or grounded in acquisition methodology.

Moderators
Thursday March 16, 2017 4:00pm - 6:00pm EDT
SCW 466/468 Student Center West

4:00pm EDT

Structural Analysis for Cultural Heritage (Part II)
(Part II) Ever present in the world of cultural heritage are the challenges associated with the assessment, diagnosis, and preservation of as-built infrastructure with potentially unknown materials, techniques, or damage. Historical buildings, monuments, and sculptures require delicate handling; the techniques used to capture their existing conditions must therefore be non-destructive, while still acquiring accurate information at the surface, subsurface, and volumetric levels. Collaboration between engineers, scientists, historians, and other stakeholders can reach beyond documentation and visualization towards the production of actionable data on the current “state of health” of buildings, monuments, and artworks, and towards predicting how structures or their constituent elements might respond to theoretical stresses in the future. Potential topics include modeling at different scales (micro vs. macro), characterization of the effects of common forces (seismic, subsidence, weathering, vandalism, etc.) and their potential impact, as well as structural monitoring and lifecycle management. Recent and ongoing research explores the application of Building Information Modeling (BIM), Finite Element Analysis (FEA), and other analytical approaches to cultural heritage. Technology must be leveraged to aid in modeling and simulating problematic aspects such as heterogeneous materials, existing damage patterns, seismic vulnerability, and unknown construction techniques. Structural engineering methods and software tools better enable cultural heritage practitioners to make informed decisions through understanding how the built environment responds to the ever-present forces that shape it.

Moderators
Dominique Risollo

Archaeologist and Special Projects Coordinator, University of California, San Diego
I am affiliated with the Center of Interdisciplinary Science for Art, Architecture, and Archaeology (CISA3) at UC San Diego and work closely with students and institutional partners on projects related to diagnostic imaging and analytical diagnostics for cultural heritage.

Thursday March 16, 2017 4:00pm - 6:00pm EDT
SCW 460/462 Student Center West
 