  IoTivity / IOT-2510

Hard to detect unit-test failure

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Undecided
    • Resolution: Won't Fix
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Build System
    • Labels:
      None
    • Found in Version/s:
      1.3-rel
    • Operating System:
      Ubuntu
    • Issue Severity:
      Normal
    • Reproducibility:
      Often (51% - 99%)
    • Bugzilla ID:
      None

      Description

      Provisioned a new build system with Ubuntu, and ran

      python auto_build.py unit_tests
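
      For what it is worth, an automated caller (a CI job, say) would normally rely only on the exit status of that command, not on reading the log. A minimal sketch of such a caller, assuming nothing about auto_build.py beyond its command line, is:

      import subprocess
      import sys

      # Run the unit-test build and trust its exit status. This is exactly the
      # assumption that breaks in this report: auto_build.py returns 0 even
      # when a test binary aborts and scons reports an error.
      result = subprocess.run(["python", "auto_build.py", "unit_tests"])
      sys.exit(result.returncode)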
      

      The tail end of the run looked like this, followed by a command I ran:

      [----------] 1 test from CborHeterogeneousArrayTest
      [ RUN      ] CborHeterogeneousArrayTest.ConvertParseTest
      [       OK ] CborHeterogeneousArrayTest.ConvertParseTest (36 ms)
      [----------] 1 test from CborHeterogeneousArrayTest (36 ms total)
      
      [----------] Global test environment tear-down
      [==========] 5 tests from 2 test cases ran. (121 ms total)
      [  PASSED  ] 5 tests.
      G_DEBUG=gc-friendly G_SLICE=always-malloc valgrind --leak-check=full --suppressions=tools/valgrind/iotivity.supp --num-callers=24 --xml=yes --xml-file=resource_csdk_stack_test.memcheck /home/mats/iotivity/out/linux/x86_64/debug/resource/csdk/stack/test/stacktests
      Running main() from gtest_main.cc
      [==========] Running 99 tests from 16 test cases.
      [----------] Global test environment set-up.
      [----------] 4 tests from StackInit
      [ RUN      ] StackInit.StackInitNullAddr
      terminate called after throwing an instance of 'std::runtime_error'
        what():  deadman timer expired
      Aborted (core dumped)
      scons: *** [out/linux/x86_64/debug/resource/csdk/stack/test/utresource/csdk/stack/test/stacktests] Error 134
      [       OK ] RDTests.RDPublishResourceNullCB (25565 ms)
      [ RUN      ] RDTests.RDPublishResource
      [       OK ] RDTests.RDPublishResource (25555 ms)
      [ RUN      ] RDTests.RDPublishMultipleResources
      [       OK ] RDTests.RDPublishMultipleResources (25043 ms)
      [ RUN      ] RDTests.RDDeleteResourceNullAddr
      [       OK ] RDTests.RDDeleteResourceNullAddr (25596 ms)
      [ RUN      ] RDTests.RDDeleteResourceNullCB
      [       OK ] RDTests.RDDeleteResourceNullCB (25565 ms)
      [ RUN      ] RDTests.RDDeleteAllResource
      [       OK ] RDTests.RDDeleteAllResource (25560 ms)
      [ RUN      ] RDTests.RDDeleteSpecificResource
      [       OK ] RDTests.RDDeleteSpecificResource (25545 ms)
      [----------] 8 tests from RDTests (202450 ms total)
      
      [----------] Global test environment tear-down
      [==========] 8 tests from 1 test case ran. (202472 ms total)
      [  PASSED  ] 8 tests
      mats$ echo $?
      0
      mats$
      

      So a unit test run has blown up with an abort and core dump, yet there is no externally detectable error indication: the script returned success. A human can see that something went wrong by examining the log (this is a failure we have seen many times, so it is not an outlier), but how is an automated system supposed to make a decision based on a final success result? We need to clean up the plumbing so that an error is reported.
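
      One way to clean up that plumbing, sketched roughly here (the helper name call_scons, the step list, and the argument handling are all hypothetical, not the real auto_build.py code): collect the exit status of every scons invocation and make the script itself exit non-zero when any step failed, instead of always falling through to success.

      import subprocess
      import sys

      def call_scons(args):
          # Hypothetical helper: run one scons invocation and return its exit status.
          return subprocess.run(["scons"] + args).returncode

      def run_steps(steps):
          # Remember whether any step failed rather than discarding the status,
          # so an aborted test binary (Error 134 above) is not masked by later
          # successful steps.
          failed = False
          for args in steps:
              if call_scons(args) != 0:
                  failed = True
          return failed

      if __name__ == "__main__":
          # Placeholder: a single plain scons call; the real argument sets live
          # in auto_build.py.
          steps = [[]]
          # Exit non-zero so a CI system can act on the result without parsing logs.
          sys.exit(1 if run_steps(steps) else 0)

      With something like that in place, the echo $? above would have printed a non-zero value that an automated system could act on.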

        Attachments


          Activity

            People

             • Assignee:
               mwichmann Mats Wichmann
             • Reporter:
               mwichmann Mats Wichmann
             • Votes:
               0
             • Watchers:
               3

              Dates

               • Created:
               • Updated:
               • Resolved: