@findex bfd_get_mtime
@subsubsection @code{bfd_get_mtime}
@strong{Synopsis}
@example
long bfd_get_mtime (bfd *abfd);
@end example
@strong{Description}@*
Return the file modification time (as read from the file system, or
from the archive header for archive members).

@findex bfd_get_size
@subsubsection @code{bfd_get_size}
@strong{Synopsis}
@example
long bfd_get_size (bfd *abfd);
@end example
@strong{Description}@*
Return the file size (as read from the file system) for the file
associated with BFD @var{abfd}.

The initial motivation for, and use of, this routine is not so much to
get the exact size of the object the BFD applies to, since that might
not be generally possible (archive members, for example).  It would be
ideal if someone could eventually modify it so that such results were
guaranteed.

Instead, we want to ask questions like "is this NNN-byte object I'm
about to try to read from file offset YYY reasonable?"  As an example
of where we might do this, some object formats use string tables for
which the first @code{sizeof (long)} bytes of the table contain the
size of the table itself, including the size bytes.  If an application
tries to read what it thinks is one of these string tables, without
some way to validate the size, and for some reason the size is wrong
(byte-swapping error, wrong location for the string table, etc.), the
only clue is likely to be a read error when it tries to read the table,
or a "virtual memory exhausted" error when it tries to allocate 15
bazillion bytes of space for the 15 bazillion byte table it is about to
read.

This function at least allows us to answer the question, "is the size
reasonable?"
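
For example, a caller might compare the size read out of such a string
table against @code{bfd_get_size} before allocating anything.  The
following is a minimal sketch; the helper name
@code{size_is_reasonable} and its parameters are illustrative and not
part of BFD:

@example
#include <stdbool.h>
#include "bfd.h"

/* Illustrative sketch only; this helper is not part of BFD.  Decide
   whether a size claimed by data inside an object file can possibly
   fit within the file itself.  */

static bool
size_is_reasonable (bfd *abfd, bfd_size_type claimed_size)
@{
  long file_size = bfd_get_size (abfd);

  /* A non-positive result means the file size could not be
     determined, so nothing can be validated against it.  */
  if (file_size <= 0)
    return false;

  return claimed_size <= (bfd_size_type) file_size;
@}
@end example

A caller that has just read the leading @code{sizeof (long)} bytes of a
string table could pass that value as @var{claimed_size} and refuse to
allocate or read the rest of the table when the check fails.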