Large Text file imports

Large Text file imports

Alan Barker
Experts -

We've been wrestling with this problem for a while.  We are an all-Windows shop with 500 Volume License users, but normally fewer than 200 concurrent users on the server at any one time.

On a "development" machine of mine, we perform large database builds (huge text file imports from Oracle Bill-of-material extracts).  The last stable version of FPA that works with these files was 13.05.  v14, v15, and now v16 all choke and die sometime during the huge file import at somewhere near row 1Million.  Not always the same place, it is inconsistent, but it will fail every single time.

I would LOVE to upgrade my production server from 15 to 16, but I cannot do that because FileMaker Server v16 will no longer allow v13.05 clients.

I'm stuck between a rock and a hard place.  FMI acknowledges that there is a limitation on large file imports, so I cannot use FPA-16 for this build.  But I can't use v13.05 (which works perfectly) if I upgrade my server to v16.  My build process needs to be able to link to hosted solutions for data-importing purposes.

I have my build machine running on a Windows Server 2015 VM with 8 GB of RAM.  Is there ANY hope at all that I can crank up the available RAM and get around this problem I'm running into?

A number of people have suggested that I hack the input files down into smaller chunks, but there are dozens of these kinds of jobs that I don't want to have to re-invent because FMI has introduced an inferior product.  It worked fine for 10 versions, but hasn't worked in the last 3.

Thanks for all suggestions,

Alan Barker

Re: Large Text file imports

Richard DeShong
Hi Alan,
Just asking for more details on the import.
To confirm:
    You have a large text file with, at times, a million-plus lines, tab- or otherwise delimited.
    You are using FMP Advanced (currently v13.05).

Do you have a local FM db file into which you import the text file, or are you importing into a server-hosted file?

For each line in the text file, are you using the text file delimiters
to import directly into fields, or are you importing each line into a
single text field, and then parsing the imported line into fields?


--
Richard DeShong
Logic Tools
510-642-5123 office
925-285-1088 cell


RE: Large Text file imports

Alan Barker
Hi Richard,

I am processing the file into a local FileMaker database, but it does need to be able to link to my production server, as there are looked-up values that get incorporated into the local file.  I am using the delimiters to direct the values into fields, which has worked well for more than a dozen years.

Alan


Re: Large Text file imports

Richard DeShong
Hi Alan,

As a test, could you re-create a new file with just the import table and no connection to the server, and turn off all auto-enter calcs?  Can v14-v16 handle the import then?

The two obvious suspects for me would be a memory variable that is getting clobbered by some process, or a timing issue between the main process (the import) and the sub-processes (the lookups).  Based on the issues I've seen in various releases, I'm guessing the latter.  It's possible that the stack array that manages the sub-processes is too small.



Re: Large Text file imports

Richard DeShong
btw: it could also simply be a memory issue - the FM client does not allow enough memory for a particular variable during the import.

Also, if FM can't resolve it, you could incorporate a file splitter (such as GSplit) into the import process.  You can use the splitter to chop the original file into chunks of x lines each.  This would mean that the import-and-process routine for a file sits inside a loop that processes all the "chunks":

GSplit ImportFile /<various options>
ChunkNum = 1
LOOP to import a file
    Calc Chunk filename
    Exit loop if Chunk does not exist
    Import specified Chunk
    Process Chunk
    ChunkNum = ChunkNum + 1
ENDLOOP

Still automated, just more managed.  Depending on what is in each line of the BOM files, it could be fairly straightforward (but that's just me being optimistic).
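
In FileMaker script steps, that loop might look something like the
sketch below - assuming GSplit has already written the pieces as
bom_chunk_1.txt, bom_chunk_2.txt, ... into C:\Imports (the naming
scheme and path are mine, not anything GSplit produces by default):

Set Error Capture [ On ]
Set Variable [ $chunkNum ; Value: 1 ]
Loop
    # Hypothetical chunk naming; match it to your GSplit options
    Set Variable [ $path ; Value: "filewin:/C:/Imports/bom_chunk_" & $chunkNum & ".txt" ]
    Import Records [ No dialog ; "$path" ; Add ; Windows ANSI ]
    # Error 100 = "File is missing", i.e. no more chunks to import
    Exit Loop If [ Get ( LastError ) = 100 ]
    Perform Script [ "Process Chunk" ]  # your existing per-chunk processing
    Set Variable [ $chunkNum ; Value: $chunkNum + 1 ]
End Loop
Set Error Capture [ Off ]

The Exit Loop If has to sit directly after Import Records so that a
missing chunk ends the loop before any processing runs.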


RE: Large Text file imports

Alan Barker
Richard -

Thank you for your advice; it will be a few days before I can try any of these tests, as production builds take priority.  I did go back and crank the FileMaker cache up as far as it would go (2 GB), but the import still failed, even though this one particular test file is only 265 MB in size.

Your suggestion about turning off all lookups at the initial file load holds promise, I think.  I'd still have to perform the lookups, but that could happen outside of the import step.  I'll report my results when I've been able to test it.
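
If the import itself survives with auto-enter off, the post-import pass
could be as simple as the sketch below (the layout and field names are
placeholders for my real ones):

# Import with "Perform auto-enter options while importing" unchecked,
# then fire all the lookups in one pass over the imported records.
Go to Layout [ "ImportTable" ]
Show All Records
Relookup Field Contents [ No dialog ; ImportTable::PartNumber ]
# PartNumber stands in for whichever match field drives the lookups;
# Relookup re-triggers every lookup keyed on that field for the found set.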

Alan


Re: Large Text file imports

Richard DeShong
Alan -

I'm pretty sure that the size of FM's cache is not the issue - although cache management could be.  I have noticed (starting in v11) that FM's memory management is not as robust as it could be.  This is not related to record data (which is what the cache is for), but to the internal management variables.

In general terms: when you create a program such as the FM client, it's very similar to the way we design an application using the FM client.  There is code logic, which can be equated to our scripts; there is the user interface; and there is a data structure, which can be equated to the tables we create - except that it's all in memory.  And just as you and I choose a particular field type, the programmers of the FM client choose variable types.  And just as with plugins, a programmer can choose among various code libraries.  Those decisions result in inherited limitations in the finished product.  Users who do something the programmers did not anticipate can run into those limitations.

Re: Large Text file imports

John Weinshel
Interesting thread, and my first question was 'what changed?'

From Al's description, nothing on his side. If that's true, then some
change in the application is the culprit. My understanding is that they
have indeed been tweaking the client cache over the past few releases, and
perhaps that has worked to the disadvantage of this edge case. That may
have been confirmed by FMI when 'FMI acknowledge[d] that there is a
limitation to large file imports'.

Which doesn't undercut Richard's suggestion to kill auto-entering, a good
idea in any scenario involving heavy importing. It's possible
auto-entering was hurting the process in earlier versions, but not enough
to make it fail.

I'd be interested to see if there were any difference (before changing the
auto-enters) if the import were performed on the server.
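
As a quick way to test that, the import could be wrapped in a script
and pushed server-side with Perform Script on Server - just a sketch,
with a made-up script and file name, and remembering that server-side
scripts can only import from the server's Documents or Temporary
folder:

# On the client:
Perform Script on Server [ "Import BOM Extract" ; Wait for completion: On ]

# "Import BOM Extract", running on the server:
Set Variable [ $path ; Value: Get ( DocumentsPath ) & "bom_extract.txt" ]
Import Records [ No dialog ; "$path" ; Add ; Windows ANSI ]

If the server-side run gets past row 1 million, that would point the
finger at the client.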

Al, how big (width and number of rows) are these files?

John
